Collection

Normative Theory and Artificial Intelligence

As humanity comes to rely on ever more sophisticated forms of artificial intelligence (AI), we face pressing normative questions about what to do with AI technology. But designing socially acceptable AI systems isn’t just a matter of taking our best existing normative theories and putting them into practice. AI systems have a quite different set of capacities and constraints than those we face as humans. As we employ expert knowledge and experimentation in advancing AI, we have to revise our existing normative theories, and confront challenging questions that wouldn’t have occurred to us in the absence of this exploration. This special issue explores how thinking about AI invites us to make first-order progress in normative theory—across normative ethics, metaethics, social epistemology, action theory and beyond.

ONLINE SUBMISSION (by invitation only): Please use the journal’s Online Manuscript Submission System, Editorial Manager®. Note that paper submissions via email are not accepted.

AUTHOR SUBMISSION GUIDELINES: Authors are asked to prepare their manuscripts according to the journal’s standard Submission Guidelines.

EDITORIAL PROCESS:

• When uploading your paper in Editorial Manager, please select “SI: Normative Theory and Artificial Intelligence” in the drop-down menu “Article Type”.

• Papers do not ordinarily exceed 10,000 words.

• All papers will undergo the journal’s standard review procedure (double-blind peer-review), according to the journal’s Peer Review Policy, Process and Guidance.

• Reviewers will be selected according to the Peer-Reviewer Selection policies.

Editors

  • Seth Lazar, Australian National University, Australia

    Seth Lazar is Professor of Philosophy at the Australian National University, and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI. He runs the Machine Intelligence and Normative Theory (MINT) Lab, where he leads research on the moral and political philosophy of AI. He has chaired the ACM Fairness, Accountability, and Transparency conference, and the ACM/AAAI AI, Ethics and Society conference. He gave the 2022 Mala and Solomon Kamm Lecture in Ethics at Harvard University, and the 2023 Tanner Lecture on AI and Human Values at Stanford University. Seth.lazar@anu.edu.au

  • Claire Benn, Australian National University, Australia

    Claire Benn is a research fellow on the Humanising Machine Intelligence Grand Challenge project at ANU. Her research addresses a wide variety of topics within AI ethics broadly construed, including programming ethical behaviour, human-machine interaction, and the role of communication and appearance in AI design. Her approach to AI ethics is two-fold: to use the questions, puzzles and problems that emerging technology raises to make foundational contributions to normative theory; and to reflect this philosophical progress back into concrete suggestions for the design and deployment of more ethical technological systems. Claire.benn@anu.edu.au

  • Todd Karhu, King’s College London, England

    Todd Karhu is a lecturer in legal and political philosophy at King’s College London, where he is based in the Yeoh Tiong Lay Centre for Politics, Philosophy & Law. He is interested in the principles that govern the ethical development and use of autonomous systems, and in how the law should assign accountability and liability when these systems cause harm. todd.karhu@kcl.ac.uk

  • Pamela Robinson, Australian National University, Australia

    Pamela Robinson is a research fellow in the School of Philosophy at the Australian National University. She works on moral philosophy, epistemology, and the ethics and safety of artificial intelligence. She is a member of the Machine Intelligence and Normative Theory (MINT) Lab and the Humanising Machine Intelligence Grand Challenge project. pamela.robinson@anu.edu.au

Articles (10 in this collection)

  1. Autonomised harming

    Authors

    • Linda Eggert
    • Content type: Original Paper
    • Open Access
    • Published: 12 July 2023