Abstract
This chapter argues that social media robots, more commonly known as “bots,” are becoming a formative tool of online radicalization. Beyond pushing targeted, personalized content that users perceive as more persuasive, algorithmic advances in “affect recognition” have given artificially intelligent agents the ability to exploit the emotional states of social media users. On encrypted social media channels, such bots are especially useful to extremist groups that lack the human and financial resources to mount large-scale psychological warfare campaigns. As newer generations of social bots become better at reading and responding to human emotions and, in turn, more convincingly anthropomorphic, we argue that these automated headhunters will play a dominant role in online radicalization. The chapter undertakes a qualitative study of the affective role of conversational AI in establishing emotional relationships with potential recruits during online radicalization. We also assess current efforts by Western security agencies, social media companies, and academic researchers to counter the online radicalization strategies of extremist organizations. Our chapter points to the need for greater cross-platform, multidisciplinary research, for the study of bots that operate in languages other than English, and for the detection of extremist content in multimodal forms such as memes, music, videos, and selfies.
Notes
- 1.
A prime example of distributed dissent (although not “physically” violent) is “Operation Payback” in 2010. In retaliation for Visa and Mastercard blocking payments to WikiLeaks, the hacker collective Anonymous launched a series of distributed denial-of-service (DDoS) attacks against the credit card companies’ websites (Sauter, 2014). In this case, unaffiliated hackers from around the globe, acting through a botnet, flooded the Visa and Mastercard sites with so much traffic that their respective servers shut down.
References
Acker, A. (2021). Social media researchers must demand more transparent data access. Morning Consult. Retrieved January 18, 2022, from https://morningconsult.com/opinions/social-media-researchers-must-demand-more-transparent-data-access/
Agnihotri, M., Pooja Rao, S. B., Jayagopi, D. B., Hebbar, S., Rasipuram, S., Maitra, A., & Sengupta, S. (2021). Towards generating topic-driven and affective responses to assist mental wellness. In A. Del Bimbo, R. Cucchiara, S. Sclaroff, et al. (Eds.), Pattern recognition. ICPR international workshops and challenges (pp. 129–143). Springer International Publishing.
Alba, D. (2020). Pro-China misinformation group continues spreading messages, researchers say. Retrieved December 18, 2022, from https://www.nytimes.com/live/2020/2020-election-misinformation-distortions#facebook-sent-flawed-data-to-misinformation-researchers
Albadi, N., Kurdi, M., & Mishra, S. (2019). Hateful people or hateful bots? Detection and characterization of bots spreading religious hatred in Arabic social media. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), Article 61. https://doi.org/10.1145/3359163
Amarasingam, A., Maher, S., & Winter, C. (2021). How Telegram disruption impacts jihadist platform migration. Retrieved January 17, 2022, from https://d1wqtxts1xzle7.cloudfront.net/65377645/21_002_01e-with-cover-page-v2.pdf?Expires=1642403235&Signature=NWABHuAesZihAlCoBEf5cjrTkQcQyfRnGuUYFPXXNF0YW3XKfCUt77P~mEFyf8vpDlQOdTzxBA2uhsz9iKzaMxv-~EfIC9gk66kLieWwLccjmg4Vp~In9f7Aj7hDr9wsYrF4CkwIwX54DbDrAyrzEJJ8pj4OLcRlYKyQTS6eYMsH-MYFerJSkzKM0PVF1ltv~cmOaG-VxaU~g~tzFtTYLf0-r6JHeW420Zph9c~m0Mi7hUlNMWGrWbN9GxHrZO6Vh8um7IPn7sJPd23EU32KWbEkNPQ~cEtxukARW956JY62kNqVl9MQlkIBsPaJhparLaEwuqoJsJ42TS59K4tIxw__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA
Araque, O., & Iglesias, C. A. (2020). An approach for radicalization detection based on emotion signals and semantic similarity. IEEE Access, 8, 17877–17891. https://doi.org/10.1109/ACCESS.2020.2967219
Ayad, M., Amarasingam, A., & Alexander, A. (2021). The cloud caliphate: Archiving the Islamic state in real-time, Institute for Strategic Dialogue (IST). Special Report (May 2021). Retrieved December 19, 2021, from https://www.isdglobal.org/isd-publications/the-cloud-caliphate-archiving-the-islamic-state-in-real-time/
Bakir, V., & McStay, A. (2021). Empathic media, emotional AI, and the optimization of disinformation. In M. Boler & E. Davis (Eds.), Affective politics of digital media: Propaganda by other means. Routledge.
Bartlett, J., Birdwell, J., & King, M. (2010). The edge of violence: A radical approach to extremism. Demos, 5–75.
Bastos, M., & Mercea, D. (2018). The public accountability of social platforms: Lessons from a study on bots and trolls in the Brexit campaign. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128), 20180003.
Bell, C., & Coleman, A. (2018). Khashoggi: Bots feed Saudi support after disappearance. Retrieved January 17, 2022, from https://www.bbc.com/news/blogs-trending-45901584
Bessi, A., & Ferrara, E. (2016). Social bots distort the 2016 US Presidential election online discussion. First Monday, 21(11). https://doi.org/10.5210/fm.v21i11.7090
Bloom, M., & Daymon, C. (2018). Assessing the future threat: ISIS’s virtual caliphate. Orbis, 62(3), 372–388. https://doi.org/10.1016/j.orbis.2018.05.007
Bock, F. (2013). Sgt. Star goes mobile, prospects get answers to questions anywhere, anytime. Retrieved January 17, 2020, from https://www.army.mil/article/103582/sgt_star_goes_mobile_prospects_get_answers_to_questions_anywhere_any_time
Bodó, B., Helberger, N., & de Vreese, C. H. (2017). Political micro-targeting: A Manchurian candidate or just a dark horse? Internet Policy Review, 6(4), 1–13. https://doi.org/10.14763/2017.4.776
Boshmaf, Y., Muslukhov, I., Beznosov, K., & Ripeanu, M. (2011). The socialbot network: When bots socialize for fame and money. Paper presented at the Proceedings of the 27th Annual Computer Security Applications Conference, Orlando, Florida, USA.
Bradshaw, S., & Howard, P. N. (2019). The global disinformation order 2019: Global inventory of organised social media manipulation. Oxford Internet Institute. Retrieved January 17, 2022, from https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf
Cafarella, J., Wallace, B., & Zhou, J. (2019). ISIS’s second comeback: Assessing the next ISIS insurgency. Institute for the Study of War. Retrieved January 17, 2022, from http://www.jstor.org/stable/resrep19572
Cherney, A., & Belton, E. (2021). Evaluating case-managed approaches to counter radicalization and violent extremism: An example of the proactive integrated support model (PRISM) Intervention. Studies in Conflict & Terrorism, 44(8), 625–645.
Corera, G. (2020). ISIS ‘still evading detection on Facebook’, report says. Retrieved January 18, 2022, from https://www.bbc.com/news/technology-53389657
Dadson, N., Snoddy, I., & White, J. (2021). Access to big data as a remedy in big tech. Competition Law Journal, 20(1), 1–10.
Darcy, A., Daniels, J., Salinger, D., Wicks, P., & Robinson, A. (2021). Evidence of human-level bonds established with a digital conversational agent: Cross-sectional, retrospective observational study. JMIR Form Res, 5(5), e27868. https://doi.org/10.2196/27868
Debre, I., & Akram, F. (2021). Facebook’s language gaps weaken screening of hate, terrorism. Retrieved January 22, 2022, from https://apnews.com/article/the-facebook-papers-language-moderation-problems
Deibert, R. J. (2019). The road to digital unfreedom: Three painful truths about social media. Journal of Democracy, 30(1), 25–39.
Dennis, A. R., Kim, A., Rahimi, M., & Ayabakan, S. (2020). User reactions to COVID-19 screening chatbots from reputable providers. Journal of the American Medical Informatics Association, 27(11), 1727–1731. https://doi.org/10.1093/jamia/ocaa167
Destephe, M., Brandao, M., Kishi, T., Zecca, M., Hashimoto, K., & Takanishi, A. (2015). Walking in the uncanny valley: Importance of the attractiveness on the acceptance of a robot as a working partner. Frontiers in Psychology, 6. https://doi.org/10.3389/fpsyg.2015.00204
Dickson, E. J. (2021). Proud Boys channels are exploding on Telegram. Retrieved January 18, 2022, from https://www.rollingstone.com/culture/culture-news/proud-boys-telegram-far-right-extremists-1114201/
Egypt today staff. (2019). Muslim Brotherhood, IS bots exploit Egypt protest hashtags. Retrieved January 18, 2022, from https://www.egypttoday.com/Article/1/75221/Muslim-Brotherhood-IS-bots-exploit-Egypt-protest-hashtags
Ferrara, E. (2017). Contagion dynamics of extremist propaganda in social networks. Information Sciences, 418–419, 1–12. https://doi.org/10.1016/j.ins.2017.07.030
Fisher, M., & Taub, A. (2018). How everyday social media users become real-world extremists. Retrieved October 17, 2022, from https://www.nytimes.com/2018/04/25/world/asia/facebook-extremism.html
Frenkel, S., & Feuer, A. (2021). ‘A total failure’: The Proud Boys now mock Trump. Retrieved January 19, 2022, from https://www.nytimes.com/2021/01/20/technology/proud-boys-trump.html
Gehl, R. W., & Bakardjieva, M. (2016). Socialbots and their friends: Digital media and the automation of sociality. Taylor & Francis.
de Gennaro, M., Krumhuber, E. G., & Lucas, G. (2020). Effectiveness of an empathic chatbot in combating adverse effects of social exclusion on mood. Frontiers in Psychology, 10. https://doi.org/10.3389/fpsyg.2019.03061
Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2), 2053951720943234.
Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), 2053951719897945.
Grimme, C., Assenmacher, D., & Adam, L. (2018). Changing perspectives: Is it sufficient to detect social bots? In G. Meiselwitz (Ed.), Social computing and social media. User experience and behavior. SCSM 2018. Springer. https://doi.org/10.1007/978-3-319-91521-0_32
Haq, H., Shaheed, S., & Stephan, A. (2020). Radicalization through the lens of situated affectivity. Frontiers in Psychology, 11. https://doi.org/10.3389/fpsyg.2020.00205
Herman, E. S., & Chomsky, N. (1988). Manufacturing consent: The political economy of the mass media. Pantheon.
Himelein-Wachowiak, M., Giorgi, S., Devoto, A., Rahman, M., Ungar, L., Schwartz, H. A., Epstein, D. H., Leggio, L., & Curtis, B. (2021). Bots and misinformation spread on social media: Implications for COVID-19. Journal of Medical Internet Research, 23(5), e26933.
Ho, M.-T., Mantello, P., Nguyen, H.-K. T., & Vuong, Q.-H. (2021). Affective computing scholarship and the rise of China: A view from 25 years of bibliometric data. Humanities and Social Sciences Communications, 8(1), 282. https://doi.org/10.1057/s41599-021-00959-8
Horsch, S. (2014). Making salvation visible. Rhetorical and visual representations of martyrs in salafī jihadist media. In S. H.-A. Saad & S. Dehghani (Eds.), Martyrdom in the modern middle east (pp. 141–166). Ergon-Verlag.
Hotez, P. J. (2020). Anti-science extremism in America: Escalating and globalizing. Microbes and Infection, 22(10), 505–507. https://doi.org/10.1016/j.micinf.2020.09.005
Howard, P. N. (2020). Lie machines: How to save democracy from troll armies, deceitful robots, junk news operations, and political operatives. Yale University Press.
iN2. (2018). The envoy and the bot: Tangibility in Daesh’s online and offline recruitment. Retrieved January 18, 2022, from https://thescli.org/the-envoy-and-the-bot-tangibility-in-daeshs-online-and-offline-recruitment/
ISIS Watch. (2022). ISIS Watch Telegram channel. Retrieved January 17, 2022, from https://t.me/s/isiswatch
Jain, M., Kumar, P., Kota, R., & Patel, S. N. (2018). Evaluating and informing the design of chatbots. Paper presented at the Proceedings of the 2018 Designing Interactive Systems Conference, Hong Kong, China.
Jhan, J. H., Liu, C. P., Jeng, S. K., & Lee, H. Y. (2021). CheerBots: Chatbots toward empathy and emotion using reinforcement learning. arXiv preprint arXiv:2110.03949.
Kiela, D., Firooz, H., Mohan, A., Goswami, V., Singh, A., Ringshia, P., & Testuggine, D. (2020). The hateful memes challenge: Detecting hate speech in multimodal memes. In Advances in neural information processing systems. MIT Press.
Kitchin, R., & Dodge, M. (2014). Code/space: Software and everyday life. MIT Press.
Konijn, E. A., & Hoorn, J. F. (2017). Parasocial interaction and beyond: Media personae and affective bonding. In The international encyclopedia of media effects (pp. 1–15). John Wiley & Sons.
Kretzschmar, K., Tyroll, H., Pavarini, G., Manzini, A., Singh, I., & Group NYPsA. (2019). Can your phone be your therapist? Young people’s ethical perspectives on the use of fully automated conversational agents (chatbots) in mental health support. Biomedical Informatics Insights, 11, 1178222619829083.
Lee, K., Caverlee, J., & Webb, S. (2010). Uncovering social spammers: Social honeypots + machine learning. Paper presented at the Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval, Geneva, Switzerland.
Lowell, H. (2021). Trump called aides hours before Capitol riot to discuss how to stop Biden victory. Retrieved January 2, 2022, from https://www.theguardian.com/us-news/2021/nov/30/donald-trump-called-top-aides-capitol-riot-biden
MacDonald, S., Correia, S. G., & Watkin, A.-L. (2019). Regulating terrorist content on social media: Automation and the rule of law. International Journal of Law in Context, 15(2), 183–197. https://doi.org/10.1017/S1744552319000119
Malmgren, E. (2017). Don’t feed the trolls. Dissent, 64(2), 9–12.
Mantello, P. (2021). Fatal portraits: The selfie as agent of radicalization. Sign Systems Studies, 49(3–4), 566–589. https://doi.org/10.12697/SSS.2021.49.3-4.16
Mantello, P., Ho, M.-T., Nguyen, M.-H., & Vuong, Q.-H. (2021). Bosses without a heart: Socio-demographic and cross-cultural determinants of attitude toward Emotional AI in the workplace. AI & SOCIETY. https://doi.org/10.1007/s00146-021-01290-1
Marcellino, W., Magnuson, M., Stickells, A., Boudreaux, B., Helmus, T. C., Geist, E., & Winkelman, Z. (2020). Counter-radicalization bot research: Using social bots to fight violent extremism. RAND Corporation. Retrieved January 15, 2022, from https://apps.dtic.mil/sti/pdfs/AD1111251.pdf
McCants, W. (2015). The ISIS apocalypse: The history, strategy, and doomsday vision of the Islamic State. Macmillan.
McCauley, C., & Moskalenko, S. (2008). Mechanisms of political radicalization: Pathways toward terrorism. Terrorism and Political Violence, 20(3), 415–433.
Mehra, V. (2021). The age of the bots. Retrieved December 30, 2021, from https://www.linkedin.com/pulse/age-bots-vipul-mehra/?trk=articles_directory
Meleagrou-Hitchens, A., Alexander, A., & Kaderbhai, N. (2017). The impact of digital communications technology on radicalization and recruitment. International Affairs, 93(5), 1233–1249.
Molla, R. (2021). Why right-wing extremists’ favorite new platform is so dangerous. Vox. Retrieved January 18, 2022, from https://www.vox.com/recode/22238755/telegram-messaging-social-media-extremists
Mueen, A., Chavoshi, N., & Minnich, A. (2019). Taming social bots: Detection, exploration and measurement. Paper presented at the Proceedings of the 28th ACM International Conference on Information and Knowledge Management, Beijing, China.
Mustafaraj, E., & Metaxas, P. T. (2017). The fake news spreading plague: Was it preventable? Paper presented at the Proceedings of the 2017 ACM on Web Science Conference, Troy, New York, USA.
Ng, M., Coopamootoo, K. P., Toreini, E., Aitken, M., Elliot, K., & van Moorsel, A. (2020, September). Simulating the effects of social presence on trust, privacy concerns & usage intentions in automated bots for finance. In 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW) (pp. 190–199). IEEE.
Orabi, M., Mouheb, D., Al Aghbari, Z., & Kamel, I. (2020). Detection of bots in social media: A systematic review. Information Processing & Management, 57(4), 102250. https://doi.org/10.1016/j.ipm.2020.102250
Papacharissi, Z. (2016). Affective publics and structures of storytelling: Sentiment, events and mediality. Information, Communication & Society, 19(3), 307–324. https://doi.org/10.1080/1369118X.2015.1109697
Pariser, E. (2011). The filter bubble: How the new personalized web is changing what we read and how we think. Penguin.
Pashentsev, E. (2020). Strategic communication in EU-Russia relations. In E. Pashentsev (Ed.), Strategic communication in EU-Russia relations: Tensions, challenges and opportunities. Palgrave Macmillan.
Pashentsev, E., & Bazarkina, D. (2022). The malicious use of AI against government and political institutions in the psychological arena. In D. N. Bielicki (Ed.), Regulating artificial intelligence in industry. Routledge.
Possati, L. M. (2022). Psychoanalyzing artificial intelligence: The case of Replika. AI & SOCIETY, 1–14.
Pozzana, I., & Ferrara, E. (2020). Measuring bot and human behavioral dynamics. Frontiers in Physics, 8. https://doi.org/10.3389/fphy.2020.00125
Ramalingam, D., & Chinnaiah, V. (2018). Fake profile detection techniques in large-scale online social networks: A comprehensive review. Computers & Electrical Engineering, 65, 165–177. https://doi.org/10.1016/j.compeleceng.2017.05.020
Roberts, S. T. (2019). Behind the screen. Yale University Press.
Sauter, M. (2014). The coming swarm: DDOS actions, hacktivism, and civil disobedience on the Internet. Bloomsbury Publishing USA.
Scott, P. (2021). Whistleblower: Facebook is misleading the public on progress against hate speech, violence, misinformation. Retrieved January 18, 2022, from https://www.cbsnews.com/news/facebook-whistleblower-frances-haugen-misinformation-public-60-minutes-2021-10-03/
Scott, M., & Nguyen, T. (2021). Jihadists flood pro-Trump social network with propaganda. Retrieved January 17, 2022, from https://www.politico.com/news/2021/08/02/trump-gettr-social-media-isis-502078
Seering, J., Flores, J. P., Savage, S., & Hammer, J. (2018). The social roles of bots: Evaluating impact of bots on discussions in online communities. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), Article 157. https://doi.org/10.1145/3274426
Shuldiner, A. (2019). Raising them right: AI and the internet of big things. In W. Lawless, R. Mittu, D. Sofge, I. S. Moskowitz, & S. Russell (Eds.), Artificial intelligence for the internet of everything (pp. 139–143). Academic Press. https://doi.org/10.1016/B978-0-12-817636-8.00008-9
Squire, M. (2021). Why do hate groups and terrorists love Telegram? In E. Leidig (Ed.), The radical right during crisis: CARR Yearbook 2020/2021 (pp. 223–228). ibidem Verlag.
Stalinsky, S., & Sosnow, R. (2020). Jihadi use of bots on the encrypted messaging platform Telegram. Retrieved January 19, 2022, from https://www.memri.org/reports/jihadi-use-bots-encrypted-messaging-platform-telegram
Stella, M., Ferrara, E., & De Domenico, M. (2018). Bots increase exposure to negative and inflammatory content in online social systems. Proceedings of the National Academy of Sciences, 115(49), 12435–12440.
Thacker, E. (2004). Networks, swarms and multitudes. In Life in the wires: The CTheory reader (pp. 165–177). Retrieved October 18, 2021, from https://journals.uvic.ca/index.php/ctheory/article/view/14541/5388
van Stekelenburg, J. (2017). Radicalization and violent emotions. PS: Political Science & Politics, 50(4), 936–939. https://doi.org/10.1017/S1049096517001020
Van den Bos, K. (2018). Why people radicalize: How unfairness judgments are used to fuel radical beliefs, extremist behaviors, and terrorism. Oxford University Press.
Weimann, G., & Vellante, A. (2021). The dead drops of online terrorism: How jihadists use anonymous online platforms. Perspectives on Terrorism, 15(4), 39–53.
Woolley, S. C., & Howard, P. N. (2018). Computational propaganda: Political parties, politicians, and political manipulation on social media. Oxford University Press.
Wright, J. L., Chen, J. Y. C., & Lakhmani, S. G. (2020). Agent transparency and reliability in human–robot interaction: The influence on user confidence and perceived reliability. IEEE Transactions on Human-Machine Systems, 50(3), 254–263. https://doi.org/10.1109/THMS.2019.2925717
York, J. (2021). Silicon values: The future of free speech under surveillance capitalism. Verso Books.
Acknowledgments
This study is part of the project “Emotional AI in Cities: Cross Cultural Lessons from UK and Japan on Designing for an Ethical Life,” funded by the JST-UKRI Joint Call on Artificial Intelligence and Society (2019) (Grant No. JPMJRX19H6). www.ethikal.ai
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Mantello, P., Ho, T. M., & Podoletz, L. (2023). Automating extremism: Mapping the affective roles of artificial agents in online radicalization. In E. Pashentsev (Ed.), The Palgrave handbook of malicious use of AI and psychological security. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-031-22552-9_4
DOI: https://doi.org/10.1007/978-3-031-22552-9_4
Publisher Name: Palgrave Macmillan, Cham
Print ISBN: 978-3-031-22551-2
Online ISBN: 978-3-031-22552-9
eBook Packages: Political Science and International Studies (R0)