Special Issue on "Trust in artificial intelligence"

Participating journal: Electronic Markets

Theme

Electronic markets for trading physical as well as digital goods offer a wide variety of services based on Artificial Intelligence (AI) technologies, known as smart market services. Smart market services use AI to generate recommendations and predictions from data available and accessible in electronic markets. For instance, financial high-speed trading is only feasible through smart market services that autonomously execute transactions according to market signals, based on AI models trained with big data. Electronic marketplaces, including Amazon and Alibaba, use AI technologies to provide smart services to consumers, optimize logistics, analyze consumer behavior, and derive innovative product and service designs. Some business leaders even consider sophisticated AI solutions a major threat to society, while using AI extensively in their own businesses. Because AI systems elude human understanding and scrutiny, trust in AI is crucial for the success of smart market services, as well as of other AI- or machine learning-based systems. Gaining trust in AI begins with transparency in the review of (a) the data, so that biases and gaps in domain knowledge are controlled, (b) the AI models and their objective functions, (c) model performance, and (d) the results that AI models generate for decision making. Trust thus becomes an important factor for overcoming uncertainty about AI-based recommendations in general, and in electronic markets in particular.
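As a hypothetical illustration (not part of this call), the Python sketch below audits a trained model's accuracy across consumer subgroups, one concrete form of the transparency reviews (a) and (c) listed above. The dataset, the group attribute, and the model choice are illustrative assumptions.

# Audit model performance per consumer subgroup (illustrative sketch).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical market data: two features, a subgroup attribute, a label.
df = pd.DataFrame({
    "price_sensitivity": [0.2, 0.8, 0.5, 0.9, 0.1, 0.7, 0.4, 0.6],
    "past_purchases":    [5, 1, 3, 0, 8, 2, 4, 3],
    "group":             ["A", "B", "A", "B", "A", "B", "A", "B"],
    "bought":            [1, 0, 1, 0, 1, 0, 1, 1],
})

X = df[["price_sensitivity", "past_purchases"]]
y = df["bought"]
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, df["group"], test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)

# Large accuracy gaps between subgroups flag potential biases in the
# data or the model (reviews (a) and (c)).
for group in sorted(g_test.unique()):
    mask = (g_test == group).to_numpy()
    print(group, accuracy_score(y_test[mask], pred[mask]))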

The quality of smart market services depends on a shared understanding and conceptual models of the data used for training AI models; data quality; the selection and training of appropriate models; and the embedding of models into smart market services. Providers of smart market services must build trust relationships with business and end customers despite limited possibilities for opening the “black boxes” of AI systems, a consequence of the increasing complexity of machine learning models. Empirical studies on trust in AI show heterogeneous results. Companies and end users appreciate the benefits and opportunities provided by smart market services. At the same time, concerns are raised with respect to privacy issues and to biases in data, models, and algorithms. Overly optimistic customers may become disappointed if smart market services do not deliver as expected, and evidence of privacy leaks and biases may reinforce prejudices; both can decrease trust in AI. A challenging research question is therefore to identify which methods, indicators, and experiences increase trust in AI. For instance, explainable AI is a technical means for opening the “black boxes” of AI systems in general, and of smart market services in particular, as the sketch below illustrates.
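As a hedged illustration only: the following Python sketch uses permutation importance, a simple model-agnostic stand-in for post-hoc explainers such as LIME (Ribeiro et al., 2016; see References), to show which features drive a black-box model's predictions. All data and names are synthetic assumptions.

# Model-agnostic explanation via permutation importance (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the data behind a smart market service.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in held-out accuracy:
# the features whose permutation hurts most drive the predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")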

This special issue seeks contributions on trust in Artificial Intelligence in the context of electronic markets. Contributions that help to understand the challenges from an economic, legal, or technical perspective are invited.

Central issues and topics

Possible topics of submissions include, but are not limited to:

Trust behavior and AI

Mental models, conceptual models and AI models

Psychological and sociological factors for trust in AI

Human-centric design of smart market services

Explainable AI for smart market services

Threats for trust in AI

Frameworks for smart markets

Business and legal aspects influencing trust in AI

Relationships between trust and business models for smart market services

Transparency of data, AI models and recommendations

Case studies on building trust in AI

Keywords

Trust, Interpretability, Mental Models, Conceptual Models, Explainable AI, Smart Market Services, Privacy, Fairness of Artificial Intelligence, Biases, Transparency

Submission guidelines

Electronic Markets is a Social Science Citation Index (SSCI)-listed journal (IF 4.765 in 2020) in the area of information systems. We encourage original contributions with a broad range of methodological approaches, including conceptual, qualitative, and quantitative research. Position papers and case studies are also welcome for this special issue. All papers should fit the journal scope (for more information, see www.electronicmarkets.org/about-em/scope/) and will undergo a double-blind peer-review process. Submissions must be made via the journal’s submission system and comply with the journal’s formatting standards. The preferred average article length is approximately 8,000 words, excluding references. If you would like to discuss any aspect of this special issue, please contact the guest editors or the Editorial Office.

References

Domingos, P. (2012). A few useful things to know about machine learning. Communications of the ACM, 55(10), 78-87. https://doi.org/10.1145/2347736.2347755.

Dwivedi, Y. K., et al. (2019). Artificial intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57, 101994. https://doi.org/10.1016/j.ijinfomgt.2019.08.002.

Jacovi, A., Marasović, A., Miller, T., & Goldberg, Y. (2021). Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 624-635. https://doi.org/10.1145/3442188.3445923.

Luo, X., Tong, S., Fang, Z., & Qu, Z. (2019). Frontiers: Machines vs. humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Marketing Science, 38(6), 937-947. https://doi.org/10.1287/mksc.2019.1192.

Maass, W., Parsons, J., Purao, S., Storey, V. C., & Woo, C. (2018). Data-driven meets theory-driven research in the era of big data: opportunities and challenges for information systems research. Journal of the Association for Information Systems, 19(12), 1. https://doi.org/10.17705/1jais.00526.

Maass, W., Parsons, J., Purao, S., & Storey, V. C. (2021). Pairing conceptual modeling with machine learning. Data & Knowledge Engineering, forthcoming.

Maass, W., Storey, V. C., & Lukyanenko, R. (2021). From mental models to machine learning models via conceptual models. In Exploring Modeling Methods for Systems Analysis and Development (EMMSAD 2021), Melbourne, Australia, pp. 1–8. https://doi.org/10.1007/978-3-030-79186-5_19.

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144. https://doi.org/10.1145/2939672.2939778.

Siau, K., & Wang, W. (2018). Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 31(2), 47-53.

Thiebes, S., Lins, S., & Sunyaev, A. (2021). Trustworthy artificial intelligence. Electronic Markets, 31(2). https://doi.org/10.1007/s12525-020-00441-4.

Participating journal

Electronic Markets focuses on social, economic, and technological aspects of digital platforms and electronic business.

Editors

  • Wolfgang Maass

    Saarland University and German Research Center for Artificial Intelligence (DFKI), Germany
    wolfgang.maass@dfki.de
  • Roman Lukyanenko

    HEC Montréal, Canada
    roman.lukyanenko@hec.ca
  • Veda C. Storey

    Georgia State University, USA
    vstorey@gsu.edu
