Abstract
We evaluated the convergent validity of the new language-independent EmojiGrid rating tool for the affective appraisal of perceived touch events. The EmojiGrid is a rectangular response grid, labeled with facial icons (emoji) that express different degrees of valence and arousal. We previously showed that participants can intuitively and reliably report their affective appraisal of different sensory stimuli (e.g., images, sounds, smells) by clicking on the EmojiGrid, without additional verbal instructions. However, because touch events can be bidirectional and are a dynamic expression of action, we cannot generalize these earlier results to the touch domain. In this study, participants (N = 65) used the EmojiGrid to report their own emotions while viewing video clips showing different touch events. The video clips came from a validated database that provides corresponding normative ratings (obtained with a 9-point SAM scale) for each clip. The affective ratings for inter-human touch obtained with the EmojiGrid show excellent agreement with the data provided in the literature (intraclass correlations of .99 for valence and .79 for arousal). For object touch events, these values are .81 and .18, respectively. This may indicate that the EmojiGrid is more sensitive to perspective (sender versus receiver) than classic tools. Also, the relation between valence and arousal shows the classic U-shape at the group level. Thus, the EmojiGrid appears to be a valid graphical self-report instrument for the affective appraisal of perceived touch events, especially for inter-human touch.
1 Introduction
Besides enabling us to discriminate material and object properties, our sense of touch also has hedonic and arousing qualities [1]. For instance, soft and smooth materials (e.g., fabrics) are typically perceived as pleasant and soothing, while stiff, rough, or coarse materials are experienced as unpleasant and arousing [1]. This affective component of touch plays a significant role in social communication. Interpersonal or social touch has a strong emotional valence that can be either positive (when expressing support, reassurance, affection, or attraction: [2]) or negative (conveying anger, frustration, or disappointment: [3]). Affective touch can profoundly influence social interactions [4]. For example, touch can lead to more favorable evaluations of the toucher [5], can persuade [6], and can regulate our physical and emotional well-being [7]. Since it is always reciprocal, social touch emotionally affects not only the receiver [7] but also the giver [8]. Touch is the primary modality for conveying intimate emotions [7, 9]. Current technological advances, such as the embodiment of artificial entities and the development of advanced haptic and tactile display technologies, also afford mediated social touch [10]. To study the emotional impact of touch and to design effective haptic social communication systems, validated and efficient affective self-report tools are needed.
In accordance with the circumplex model of affect [11], the affective responses elicited by tactile stimuli vary mainly along the two principal affective dimensions of valence and arousal [12]. Most studies on the emotional response to touch apply two separate one-dimensional Likert scales [13] or SAM (Self-Assessment Manikin: [14]) scales [15] to measure the two affective dimensions. Although the SAM is a validated and widely used tool, it has some practical drawbacks. People often fail to understand the emotions it depicts [16]. While the SAM's valence dimension is quite intuitive (a facial expression going from a frown to a smile), its arousal dimension (which looks like an "explosion" in the figure's stomach) is often misunderstood [16], including in the context of affective touch [15]. Hence, there is a need for new rating scales to measure the subjective quality of affective touch [17].
We developed a new, intuitive, and language-independent self-report instrument called the EmojiGrid (see Fig. 1): a rectangular response grid labeled with facial icons (emoji) expressing different levels of valence (e.g., angry face vs. smiling face) and arousal (e.g., sleepy face vs. excited face) [16]. We previously found that participants can intuitively and reliably report their affective response with a single click on the EmojiGrid, even without verbal instructions [16]. This suggested that the EmojiGrid might also serve as a general instrument to assess human affective responses.
In this study, we evaluated the convergent validity of the EmojiGrid as a self-report tool for the affective assessment of perceived touch events. To this end, we used the EmojiGrid to measure perceived valence and arousal for various touch events in video clips from a validated affective video database, and we compared the results with the normative ratings that were obtained with a conventional, validated affective rating tool (the SAM) and that are provided with this database. The brain activity patterns elicited by imagined, perceived, and experienced (affective) touch are highly similar. To some extent, people experience the same touches as the ones they see: they have the ability to imagine how an observed touch would feel [18]. This affords the use of video clips showing touch actions to study affective touch perception.
2 Methods and Procedure
2.1 Stimuli
The stimuli used in this study were all 75 video clips from the validated Socio-Affective Touch Expression Database (SATED: [15]). These clips represent 25 different dynamic touch events that vary widely in valence and arousal. The interpersonal socio-affective touch events (N = 13) show people hugging, patting, punching, pushing, shaking hands, or nudging each other's arm (e.g., Fig. 2, left). The object-based (non-social) touch events (N = 12) represent human-object interactions with motions that match those involved in the corresponding social touch events, and show people touching, grabbing, carrying, shaking, or pushing objects like bottles, boxes, baskets, or doors (e.g., Fig. 2, right). Each touch event is performed three times (i.e., by three different actors or actor pairs) and lasts about three seconds, resulting in a total of 75 video clips. All video clips have a resolution of 640 × 360 pixels.
2.2 Participants
English-speaking participants were recruited via the Prolific database (https://prolific.ac). A total of 65 participants (43 females, 22 males) aged between 18 and 35 (M = 25.7; SD = 5.0) took part in this study. The experimental protocol was reviewed and approved by the TNO Ethics Committee (Ethical Approval Ref: 2017-011) and was in accordance with the Helsinki Declaration of 1975, as revised in 2013 [19]. Participation was voluntary. After completing the study, all participants received a small financial compensation.
2.3 Measures
Demographics.
Participants reported their age, gender, and nationality.
Valence and Arousal.
Valence and arousal were measured with the EmojiGrid (see Fig. 1; this tool was first introduced in [16]). The EmojiGrid is a square grid labeled with emoji showing different facial expressions. Each side of the grid is labeled with five emoji, and one (neutral) emoji is located in its center. The facial expressions of the emoji along the horizontal (valence) axis vary from disliking (unpleasant) via neutral to liking (pleasant), and their intensity gradually increases along the vertical (arousal) axis. Users can report their affective state by placing a check mark at the appropriate location on the grid.
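Because the database's normative ratings use a 9-point SAM scale, EmojiGrid click coordinates must be mapped onto a comparable numeric range before the two studies can be compared. The sketch below illustrates one straightforward way to do this, assuming a linear mapping from pixel coordinates on a square grid to 1-9 valence and arousal scores; the function name, grid size, and scale bounds are our own illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): map an EmojiGrid click to
# (valence, arousal) scores on a 1-9 scale comparable to the SAM.
# Assumes the grid's lower-left corner is pixel (0, 0), with valence
# increasing to the right and arousal increasing upward.

def emojigrid_click_to_scores(x_px, y_px, grid_size_px=500.0,
                              scale_min=1.0, scale_max=9.0):
    """Linearly convert the pixel coordinates of a click on a square
    EmojiGrid of side grid_size_px to (valence, arousal) scores."""
    span = scale_max - scale_min
    valence = scale_min + (x_px / grid_size_px) * span  # left = unpleasant
    arousal = scale_min + (y_px / grid_size_px) * span  # bottom = calm
    return valence, arousal

# A click slightly right of center and near the top of the grid:
print(emojigrid_click_to_scores(300, 420))  # -> (5.8, 7.72)
```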
2.4 Procedure
The experiment was performed as an (anonymous) online survey created with the Gorilla experiment builder [20]. The participants viewed 75 brief video clips, each showing a different touch event, and rated for each video how the touch would feel. First, the participants gave informed consent and reported their demographic variables. Next, they were introduced to the EmojiGrid response tool and were told how to use it to report their affective rating for each perceived touch event. The instructions stated: "Click on a point inside the grid that best matches how you think the touch event feels". No further explanation was given. The participants then performed two practice trials to familiarize themselves with the EmojiGrid and its use. The actual experiment started directly after these practice trials. The video clips were presented in random order. The rating task was self-paced, without an imposed time limit. After seeing each video clip, the participants responded by clicking on the EmojiGrid (see Fig. 1). Immediately after responding, the next video clip appeared. On average, the experiment lasted about 10 min.
2.5 Data Analysis
IBM SPSS Statistics 25 (www.ibm.com) for Windows was used to perform all statistical analyses. Intraclass correlation coefficient (ICC) estimates and their 95% confidence intervals were based on a mean-rating (k = 3), absolute-agreement, 2-way mixed-effects model [21]. ICC values below .5 indicate poor reliability, values between .5 and .75 moderate reliability, values between .75 and .9 good reliability, and values above .9 excellent reliability [21]. For all other analyses, a probability level of p < .05 was considered statistically significant.
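For readers who wish to reproduce this reliability analysis outside SPSS, the sketch below shows an equivalent estimate in Python with the pingouin package, whose intraclass_corr function reports the six ICC forms of [21]. The data file and column names are hypothetical placeholders; the ICC2k row (average-measures, absolute agreement) is computed with the same formula SPSS uses for an absolute-agreement, average-measures ICC.

```python
# Illustrative re-analysis sketch (the authors used SPSS). The long-format
# CSV and its column names are hypothetical placeholders.
import pandas as pd
import pingouin as pg

ratings = pd.read_csv("mean_ratings_long.csv")  # columns: scenario, rater, score
icc = pg.intraclass_corr(data=ratings, targets="scenario",
                         raters="rater", ratings="score")

# Select the average-measures, absolute-agreement estimate and its 95% CI.
print(icc.set_index("Type").loc["ICC2k", ["ICC", "CI95%"]])
```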
For each of the 25 touch scenarios, we computed the mean valence and arousal responses over all three of its representations and over all participants. We used Matlab 2019a (www.mathworks.com) to investigate the relation between the (mean) valence and arousal ratings and to plot the data. The Curve Fitting Toolbox (version 3.5.7) in Matlab was used to compute a least-squares fit of a quadratic function to the data points.
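As an illustration of this fitting step, the following sketch computes a least-squares quadratic fit and its adjusted R-squared in Python with NumPy instead of Matlab's Curve Fitting Toolbox; the input arrays stand in for the 25 per-scenario mean ratings.

```python
# Hedged sketch of the quadratic (U-shape) fit; `valence` and `arousal`
# are placeholders for the 25 per-scenario mean ratings.
import numpy as np

def quadratic_fit_adj_r2(valence, arousal):
    """Fit arousal = a*valence^2 + b*valence + c by least squares and
    return the coefficients together with the adjusted R-squared."""
    valence = np.asarray(valence, dtype=float)
    arousal = np.asarray(arousal, dtype=float)
    coeffs = np.polyfit(valence, arousal, deg=2)
    residuals = arousal - np.polyval(coeffs, valence)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((arousal - np.mean(arousal)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    n, p = len(valence), 2  # number of observations and of predictors
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    return coeffs, adj_r2
```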
3 Results
For each touch scenario, the mean and standard deviation of the valence and arousal responses were computed over its three representations and over all participants.
To quantify the agreement between the ratings obtained with the EmojiGrid (present study) and with the 9-point SAM scales [15], we computed intraclass correlation coefficient (ICC) estimates with their 95% confidence intervals for the mean valence and arousal ratings between both studies, both for all touch events together and for social and non-social events separately (see Table 1). For all touch events, and for social touch events in particular, the valence ratings show excellent reliability and the arousal ratings show good reliability. For object-based touch events, the valence ratings also show good reliability, but the arousal ratings show poor reliability.
Figure 3 shows the correlation plots between the mean valence and arousal ratings obtained with the EmojiGrid in this study and those obtained with the SAM in [15]. This figure shows that the mean valence ratings for all touch events closely agree between both studies: the original classification [15] into positive, negative, and neutral scenarios also holds for our results. A Mann-Whitney U test revealed that mean valence was indeed significantly higher for the positive scenarios (Mdn = 7.27, MAD = 0.31, n = 6) than for the negative ones (Mdn = 2.77, MAD = 0.35, n = 6), U = 0, z = −2.88, p = .004, r = .58 (large effect size). Additionally, mean valence differed significantly between positive social touch scenarios and object-based touch scenarios (Mdn = 5.13, MAD = 0.35, n = 12), U = 0, z = −3.37, p = .001, and between negative social touch scenarios and object-based touch scenarios, U = 0, z = −3.42, p = .001, r = .68 (large effect size).
Whereas the mean valence ratings closely agree between both studies for all touch events, the arousal ratings only agree for the social touch events, not for the object-based touch events. The mean arousal ratings for object-based touch events are consistently higher with the EmojiGrid than with the SAM. We also compared mean arousal between social touch and object touch. The results revealed that social touch was rated as more arousing (Mdn = 5.35, MAD = 0.89) than object-based (non-social) touch (Mdn = 4.10, MAD = 0.37), U = 27.0, z = −2.77, p ≤ .009, r = .55 (large effect size). This agrees with Lee Masson and Op de Beeck's conclusion that completely un-arousing social touch does not seem to exist [15].
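The sketch below shows how such a comparison can be reproduced with SciPy (the authors used SPSS, so exact z and p values may differ slightly, e.g., in how ties are handled). The effect size r = |z| / √N follows the convention used above, and z is recovered from the normal approximation to U.

```python
# Hedged sketch of a Mann-Whitney U comparison with effect size r;
# the input arrays are hypothetical per-scenario mean arousal ratings.
import numpy as np
from scipy.stats import mannwhitneyu

def compare_groups(a, b):
    """Two-sided Mann-Whitney U test; also returns z (normal
    approximation, no tie correction) and the effect size r."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    u, p = mannwhitneyu(a, b, alternative="two-sided")
    n1, n2 = len(a), len(b)
    mu_u = n1 * n2 / 2.0
    sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu_u) / sigma_u
    r = abs(z) / np.sqrt(n1 + n2)
    return u, z, p, r
```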
Figure 4 shows the relation between the mean valence and arousal ratings for all 25 different SATED scenarios, as measured with the EmojiGrid in this study and with a 9-point SAM scale in [15]. The curves represent least-squares quadratic fits to the data points. The adjusted R-squared values are .75 and .90, respectively, indicating good fits. This figure shows that the relation between the mean valence and arousal ratings provided by both self-assessment methods is closely described by a quadratic (U-shaped) relation at the nomothetic (group) level.
All results are available in the OSF repository at https://doi.org/10.17605/osf.io/d8sc3.
4 Conclusion and Discussion
The affective (valence and arousal) ratings obtained with the EmojiGrid show good to excellent agreement with the data provided with the database for inter-human (social) touch events. Our results also replicate the U-shaped (quadratic) relation between the mean valence and arousal ratings reported in the literature [15]. Thus, we conclude that the EmojiGrid appears to be a valid graphical self-report instrument for the affective appraisal of perceived social touch events. The agreement for object touch events is good for valence but poor for arousal. This may be related to the instruction to rate "how the touch would feel". Although not explicitly instructed to do so, participants may have adopted the receiver perspective rather than the sender perspective of the interaction. The receiver perspective has less meaning for object touch events, probably causing a switch to the sender perspective. Classic rating tools may be less sensitive to these different perspectives, a distinction that is only relevant for touch events; this would explain the extremely low arousal scores obtained with the SAM tool and the low intraclass correlations.
References
1. Essick, G.K., McGlone, F., Dancer, C., et al.: Quantitative assessment of pleasant touch. Neurosci. Biobehav. Rev. 34(2), 192–203 (2010)
2. Jones, S.E., Yarbrough, A.E.: A naturalistic study of the meanings of touch. Commun. Monogr. 52(1), 19–56 (1985)
3. Knapp, M.L., Hall, J.A.: Nonverbal Communication in Human Interaction, 7th edn. Wadsworth/Cengage Learning, Boston (2010)
4. Hertenstein, M.J., Verkamp, J.M., Kerestes, A.M., et al.: The communicative functions of touch in humans, nonhuman primates, and rats: a review and synthesis of the empirical research. Genet. Soc. Gen. Psychol. Monogr. 132(1), 5–94 (2006)
5. Erceau, D., Guéguen, N.: Tactile contact and evaluation of the toucher. J. Soc. Psychol. 147(4), 441–444 (2007)
6. Crusco, A.H., Wetzel, C.G.: The Midas touch: the effects of interpersonal touch on restaurant tipping. Pers. Soc. Psychol. Bull. 10(4), 512–517 (1984)
7. Field, T.: Touch for socioemotional and physical well-being: a review. Dev. Rev. 30(4), 367–383 (2010)
8. Gentsch, A., Panagiotopoulou, E., Fotopoulou, A.: Active interpersonal touch gives rise to the social softness illusion. Curr. Biol. 25(18), 2392–2397 (2015)
9. Morrison, I., Löken, L., Olausson, H.: The skin as a social organ. Exp. Brain Res. 204(3), 305–314 (2010)
10. Huisman, G.: Social touch technology: a survey of haptic technology for social touch. IEEE Trans. Haptics 10(3), 391–408 (2017)
11. Russell, J.A.: A circumplex model of affect. J. Pers. Soc. Psychol. 39(6), 1161–1178 (1980)
12. Hasegawa, H., Okamoto, S., Ito, K., et al.: Affective vibrotactile stimuli: relation between vibrotactile parameters and affective responses. Int. J. Affect. Eng. 18(4), 171–180 (2019)
13. Salminen, K., Surakka, V., Lylykangas, J., et al.: Emotional and behavioral responses to haptic stimulation. In: SIGCHI Conference on Human Factors in Computing Systems (CHI 2008), pp. 1555–1562. ACM, New York (2008)
14. Bradley, M.M., Lang, P.J.: Measuring emotion: the self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry 25(1), 49–59 (1994)
15. Lee Masson, H., Op de Beeck, H.: Socio-affective touch expression database. PLoS ONE 13(1), e0190921 (2018)
16. Toet, A., Kaneko, D., Ushiama, S., et al.: EmojiGrid: a 2D pictorial scale for the assessment of food elicited emotions. Front. Psychol. 9, 2396 (2018)
17. Schneider, O.S., Seifi, H., Kashani, S., et al.: HapTurk: crowdsourcing affective ratings of vibrotactile icons. In: 2016 CHI Conference on Human Factors in Computing Systems, pp. 3248–3260. ACM, New York (2016)
18. Keysers, C., Gazzola, V.: Expanding the mirror: vicarious activity for actions, emotions, and sensations. Curr. Opin. Neurobiol. 19(6), 666–671 (2009)
19. World Medical Association: World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. J. Am. Med. Assoc. 310(20), 2191–2194 (2013)
20. Anwyl-Irvine, A., Massonnié, J., Flitton, A., et al.: Gorilla in our midst: an online behavioral experiment builder. bioRxiv 438242 (2019)
21. Koo, T.K., Li, M.Y.: A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J. Chiropr. Med. 15(2), 155–163 (2016)