Abstract
As rapid advancements in artificial intelligence (AI) continue to reshape various professional sectors, researchers are beginning to explore the potential of AI tools, such as OpenAI’s ChatGPT. This chapter delves into the utilization of AI for questionnaire item generation, underlining the importance of systematic and transparent approaches in line with preliminary guidelines offered by academic journals. We present a detailed case study on creating the AI Acceptability Instrument, which is designed to assess the acceptability of AI technology among the general public. Our approach demonstrates the use of AI as an assistive tool rather than a decision-maker, upholding the essential principles of transparency and theoretical grounding in the process. Guided by current journal guidelines and additional advice, we highlight how incorporating AI technologies such as ChatGPT can elevate the process of questionnaire item development. This chapter aims to provide researchers with a robust, systematic, and theory-driven framework for employing AI tools in research, thereby fostering a balanced utilization of AI in the field. We emphasize that, while AI holds transformative potential, its application must be tempered with critical awareness of its capabilities and limitations to ensure ethical and methodologically sound usage.
Appendix
English version of the AI Acceptability Instrument (AIAI)

1. The development of AI is blasphemous.
2. I believe that AI can contribute positively to the creative industries, such as art, music, and writing.
3. Widespread use of AI would take away jobs from people.
4. I don’t know why, but AI scares me.
5. Widespread use of AI would mean that it would be costly for us to maintain it.
6. I feel that AI has the potential to enhance our quality of life by automating mundane tasks.
7. I am concerned that AI might perpetuate existing inequalities by primarily benefiting the wealthy and powerful.
8. Something bad might happen if AI developed human-like characteristics.
9. I am worried that AI might lead to an erosion of human skills and knowledge.
10. I feel overwhelmed and stressed when coping with the uncertainty and ambiguity of AI development and innovation.
11. I believe that AI can significantly improve the efficiency of various industries.
12. Widespread use of AI in society will make it less warm.
13. AI can make our lives easier.
14. I am worried that AI will make us lazier.
15. I feel confident in my skills and abilities to compete with AI in the future.
16. Technologies needed for the development of AI belong to scientific fields that humans should not study.
17. People interacting with AI could sometimes lead to problems in relationships between people.
18. I would hate the idea of AI making judgments about things.
19. I can trust persons and organizations related to the development of AI.
20. I would feel relaxed talking with AI.
21. I believe that AI is crucial to solve complex global problems, such as climate change and disease outbreaks.
22. I believe that AI can help us better understand human behavior and emotions.
23. I have a hopeful outlook on the future impact of AI on society.
24. I believe that AI can help us better understand and manage complex systems, such as ecosystems and economies.
25. I would feel uneasy if AI really had emotions or independent thoughts.
26. I regularly use AI capabilities in my daily life or work.
27. I am concerned that AI would be a bad influence on children.
28. I trust the information I receive about AI from media.
29. I am concerned that AI will harm humanity and society.
30. I would feel very nervous just interacting with AI.
31. Persons and organizations related to the development of AI will consider the needs, thoughts, and feelings of their users.
32. I feel that AI should be strictly regulated to ensure ethical and responsible development.
33. I trust persons and organizations related to the development of AI to disclose sufficient information to the public, including negative information.
34. I am afraid that AI will make us forget what it is like to be human.
35. If AI causes accidents or trouble, persons and organizations related to its development should give sufficient compensation to the victims.
36. I am worried that AI will become too powerful and unpredictable for humans to control and govern.
37. I would feel nervous operating AI in front of other people.
38. AI should perform repetitive and boring routine tasks instead of leaving them to people.
39. I feel that if we become over-dependent on AI, something bad might happen.
40. AI should perform dangerous tasks, for example in disaster areas, deep sea, and space.
41. I evaluate the sources and credibility of information about AI critically and verify them.
42. I feel that in the future, society will be dominated by AI.
43. I think AI can enhance our decision-making processes by providing unbiased and data-driven insights.
44. I am afraid that AI will encourage less interaction between humans.
45. Something bad might happen if AI developed into living beings.
46. I am comfortable using AI tools or applications in my work or personal life.
47. I am worried that AI could lead to a loss of privacy for individuals.
48. I am concerned that AI may be used for malicious purposes, such as cyber-attacks or surveillance.
49. AI is a natural product of our civilization.
50. I feel that in the future, society will be dominated by AI.
51. I feel that if I depend on AI too much, something bad might happen.
52. I am concerned that AI might contribute to the widening of the digital divide.
53. I believe that humans will find effective ways to regulate AI development and deployment.
54. AI can be very useful for teaching young kids.
55. I resist AI as a negative force in my life and decision-making.
56. If AI had emotions, I would be able to make friends with it.
57. I trust AI to make fair and transparent decisions that affect me or others.
58. I am concerned that AI might be used to manipulate public opinion or spread misinformation.
59. I am worried that AI may reinforce existing biases and stereotypes in society.
60. I often worry about losing my job or career prospects to AI.
61. I am concerned that the benefits of AI may not be equitably distributed across society.
62. I am worried that AI might lead to a loss of control over our own data and personal information.
63. I consider AI too complex and technical for my needs.
64. AI can be very useful for caring for the elderly and disabled.
65. The development of AI is a blasphemy against nature.
66. I treat AI agents or systems as tools or partners in my interactions with them.
67. I would feel uneasy if AI really had emotions.
68. I embrace AI as a positive force in my life and decision-making.
69. I am worried that AI could increase social isolation by replacing human-to-human interactions.
70. I would feel paranoid talking with AI.
71. I feel confident and curious when coping with the uncertainty and ambiguity of AI development and innovation.
72. I would feel uneasy if I was given a job where I had to use AI.
73. I would hate the idea that AI was making judgments about things.
74. I am familiar with different AI capabilities such as natural-language generation, computer vision, and robotic process automation.
75. I like the idea that AI can augment and enhance human intelligence and creativity.
76. I think AI can be biased and opaque when making decisions that affect me or others.
77. I think AI has the potential to replace and surpass human intelligence and creativity.
78. I am interested in learning more about AI and its applications for my personal and professional development.
79. I feel comforted being with AI that has emotions.
80. I believe that AI can help reduce social inequalities by providing better access to education and healthcare.
81. I believe that AI can help empower individuals by providing them with personalized services and support.
82. AI can create new forms of interactions both between humans and between humans and machines.
83. Persons and organizations related to the development of AI are well-meaning.
84. I am concerned that AI would be a bad influence on children.
85. I feel that AI should be designed to prioritize human values, well-being, and dignity.
86. I believe that AI can help us achieve a more sustainable and eco-friendly future.
87. I treat AI agents or systems as rivals or enemies in my interactions with them.
88. The term “AI” means nothing to me.
89. I am aware of the dangers and risks associated with AI, such as job losses, social manipulation, surveillance, biases, inequality, ethics, and autonomous weapons.
90. I feel that AI should be developed in a transparent and collaborative manner, involving input from diverse stakeholders.
91. I am concerned that AI might exacerbate existing power imbalances between countries and corporations.
92. I have a fearful outlook on the future impact of AI on society.
93. I am concerned about the ethical and governance aspects of AI in health and medicine.
94. I believe AI has the potential to improve the quality and accessibility of healthcare services.
95. I don’t know why, but I like the idea of AI.
96. I have concerns about the reliability, security, or privacy of AI tools or applications.
Copyright information
© 2023 Springer Nature Switzerland AG
Cite this entry
Krägeloh, C.U., Alyami, M.M., Medvedev, O.N. (2023). AI in Questionnaire Creation: Guidelines Illustrated in AI Acceptability Instrument Development. In: Krägeloh, C.U., Alyami, M., Medvedev, O.N. (eds) International Handbook of Behavioral Health Assessment. Springer, Cham. https://doi.org/10.1007/978-3-030-89738-3_62-1
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-89738-3
Online ISBN: 978-3-030-89738-3