
AI in Questionnaire Creation: Guidelines Illustrated in AI Acceptability Instrument Development

  • Living reference work entry
International Handbook of Behavioral Health Assessment

Abstract

As rapid advances in artificial intelligence (AI) continue to reshape professional sectors, researchers are beginning to explore the potential of AI tools such as OpenAI’s ChatGPT. This chapter examines the use of AI for questionnaire item generation, underlining the importance of systematic and transparent approaches in line with the preliminary guidelines offered by academic journals. We present a detailed case study on creating the AI Acceptability Instrument, which is designed to assess how acceptable people find AI technology. Our approach demonstrates the use of AI as an assistive tool rather than a decision-maker, upholding the essential principles of transparency and theoretical grounding throughout the process. Guided by current journal guidelines and additional recommendations, we highlight how incorporating AI technologies such as ChatGPT can strengthen questionnaire item development. The chapter aims to provide researchers with a robust, systematic, and theory-driven framework for employing AI tools in research, thereby fostering a balanced use of AI in the field. We emphasize that, while AI holds transformative potential, its application must be tempered with critical awareness of its capabilities and limitations to ensure ethical and methodologically sound use.
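As an illustration of the kind of workflow summarized above, the minimal sketch below shows how candidate questionnaire items might be drafted with a large language model and then handed over for human review. It is a hypothetical example only: the openai Python client, the model name, and the prompt wording are assumptions made for illustration and do not reproduce the authors’ actual procedure, which is documented in the chapter itself.

```python
# Hypothetical sketch: drafting candidate items with a large language model.
# Client, model name, and prompt are illustrative assumptions only and do not
# reproduce the authors' procedure for the AIAI.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Acting as a psychometrician, draft 10 candidate Likert-type items "
    "measuring the acceptability of artificial intelligence to the general "
    "public. Cover both favourable attitudes (perceived benefit, trust) and "
    "unfavourable attitudes (fear, job loss, privacy). One item per line."
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,
)

# The generated text is only a starting point: every candidate item is
# reviewed against theory and existing instruments before being retained.
for line in response.choices[0].message.content.splitlines():
    if line.strip():
        print(line.strip())
```

Keeping the model’s role to drafting, with all retention decisions made by the research team, reflects the chapter’s framing of AI as an assistive tool rather than a decision-maker.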



Author information

Correspondence to Christian U. Krägeloh.


Appendix

  • English version of the AI Acceptability Instrument (AIAI)

    1. The development of AI is blasphemous.
    2. I believe that AI can contribute positively to the creative industries, such as art, music, and writing.
    3. Widespread use of AI would take away jobs from people.
    4. I don’t know why, but AI scares me.
    5. Widespread use of AI would mean that it would be costly for us to maintain it.
    6. I feel that AI has the potential to enhance our quality of life by automating mundane tasks.
    7. I am concerned that AI might perpetuate existing inequalities by primarily benefiting the wealthy and powerful.
    8. Something bad might happen if AI developed human-like characteristics.
    9. I am worried that AI might lead to an erosion of human skills and knowledge.
    10. I feel overwhelmed and stressed when coping with the uncertainty and ambiguity of AI development and innovation.
    11. I believe that AI can significantly improve the efficiency of various industries.
    12. Widespread use of AI in society will make it less warm.
    13. AI can make our lives easier.
    14. I am worried that AI will make us lazier.
    15. I feel confident in my skills and abilities to compete with AI in the future.
    16. Technologies needed for the development of AI belong to scientific fields that humans should not study.
    17. People interacting with AI could sometimes lead to problems in relationships between people.
    18. I would hate the idea of AI making judgments about things.
    19. I can trust persons and organizations related to the development of AI.
    20. I would feel relaxed talking with AI.
    21. I believe that AI is crucial to solve complex global problems, such as climate change and disease outbreaks.
    22. I believe that AI can help us better understand human behavior and emotions.
    23. I have a hopeful outlook on the future impact of AI on society.
    24. I believe that AI can help us better understand and manage complex systems, such as ecosystems and economies.
    25. I would feel uneasy if AI really had emotions or independent thoughts.
    26. I regularly use AI capabilities in my daily life or work.
    27. I am concerned that AI would be a bad influence on children.
    28. I trust the information I receive about AI from media.
    29. I am concerned that AI will harm humanity and society.
    30. I would feel very nervous just interacting with AI.
    31. Persons and organizations related to the development of AI will consider the needs, thoughts, and feelings of their users.
    32. I feel that AI should be strictly regulated to ensure ethical and responsible development.
    33. I trust persons and organizations related to the development of AI to disclose sufficient information to the public, including negative information.
    34. I am afraid that AI will make us forget what it is like to be human.
    35. If AI causes accidents or trouble, persons and organizations related to its development should give sufficient compensation to the victims.
    36. I am worried that AI will become too powerful and unpredictable for humans to control and govern.
    37. I would feel nervous operating AI in front of other people.
    38. AI should perform repetitive and boring routine tasks instead of leaving them to people.
    39. I feel that if we become over-dependent on AI, something bad might happen.
    40. AI should perform dangerous tasks, for example in disaster areas, deep sea, and space.
    41. I evaluate the sources and credibility of information about AI critically and verify them.
    42. I feel that in the future, society will be dominated by AI.
    43. I think AI can enhance our decision-making processes by providing unbiased and data-driven insights.
    44. I am afraid that AI will encourage less interaction between humans.
    45. Something bad might happen if AI developed into living beings.
    46. I am comfortable using AI tools or applications in my work or personal life.
    47. I am worried that AI could lead to a loss of privacy for individuals.
    48. I am concerned that AI may be used for malicious purposes, such as cyber-attacks or surveillance.
    49. AI is a natural product of our civilization.
    50. I feel that in the future, society will be dominated by AI.
    51. I feel that if I depend on AI too much, something bad might happen.
    52. I am concerned that AI might contribute to the widening of the digital divide.
    53. I believe that humans will find effective ways to regulate AI development and deployment.
    54. AI can be very useful for teaching young kids.
    55. I resist AI as a negative force in my life and decision-making.
    56. If AI had emotions, I would be able to make friends with it.
    57. I trust AI to make fair and transparent decisions that affect me or others.
    58. I am concerned that AI might be used to manipulate public opinion or spread misinformation.
    59. I am worried that AI may reinforce existing biases and stereotypes in society.
    60. I often worry about losing my job or career prospects to AI.
    61. I am concerned that the benefits of AI may not be equitably distributed across society.
    62. I am worried that AI might lead to a loss of control over our own data and personal information.
    63. I consider AI too complex and technical for my needs.
    64. AI can be very useful for caring for the elderly and disabled.
    65. The development of AI is a blasphemy against nature.
    66. I treat AI agents or systems as tools or partners in my interactions with them.
    67. I would feel uneasy if AI really had emotions.
    68. I embrace AI as a positive force in my life and decision-making.
    69. I am worried that AI could increase social isolation by replacing human-to-human interactions.
    70. I would feel paranoid talking with AI.
    71. I feel confident and curious when coping with the uncertainty and ambiguity of AI development and innovation.
    72. I would feel uneasy if I was given a job where I had to use AI.
    73. I would hate the idea that AI was making judgments about things.
    74. I am familiar with different AI capabilities such as natural-language generation, computer vision, and robotic process automation.
    75. I like the idea that AI can augment and enhance human intelligence and creativity.
    76. I think AI can be biased and opaque when making decisions that affect me or others.
    77. I think AI has the potential to replace and surpass human intelligence and creativity.
    78. I am interested in learning more about AI and its applications for my personal and professional development.
    79. I feel comforted being with AI that has emotions.
    80. I believe that AI can help reduce social inequalities by providing better access to education and healthcare.
    81. I believe that AI can help empower individuals by providing them with personalized services and support.
    82. AI can create new forms of interactions both between humans and between humans and machines.
    83. Persons and organizations related to the development of AI are well-meaning.
    84. I am concerned that AI would be a bad influence on children.
    85. I feel that AI should be designed to prioritize human values, well-being, and dignity.
    86. I believe that AI can help us achieve a more sustainable and eco-friendly future.
    87. I treat AI agents or systems as rivals or enemies in my interactions with them.
    88. The term “AI” means nothing to me.
    89. I am aware of the dangers and risks associated with AI, such as job losses, social manipulation, surveillance, biases, inequality, ethics, and autonomous weapons.
    90. I feel that AI should be developed in a transparent and collaborative manner, involving input from diverse stakeholders.
    91. I am concerned that AI might exacerbate existing power imbalances between countries and corporations.
    92. I have a fearful outlook on the future impact of AI on society.
    93. I am concerned about the ethical and governance aspects of AI in health and medicine.
    94. I believe AI has the potential to improve the quality and accessibility of healthcare services.
    95. I don’t know why, but I like the idea of AI.
    96. I have concerns about the reliability, security, or privacy of AI tools or applications.


Copyright information

© 2023 Springer Nature Switzerland AG

About this entry


Cite this entry

Krägeloh, C.U., Alyami, M.M., Medvedev, O.N. (2023). AI in Questionnaire Creation: Guidelines Illustrated in AI Acceptability Instrument Development. In: Krägeloh, C.U., Alyami, M., Medvedev, O.N. (eds) International Handbook of Behavioral Health Assessment. Springer, Cham. https://doi.org/10.1007/978-3-030-89738-3_62-1


  • DOI: https://doi.org/10.1007/978-3-030-89738-3_62-1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-89738-3

  • Online ISBN: 978-3-030-89738-3

  • eBook Packages: Springer Reference Behavioral Science and Psychology, Reference Module Humanities and Social Sciences, Reference Module Business, Economics and Social Sciences
