
Large Language Models: A Philosophical Reckoning

Participating journal: Ethics and Information Technology

Large Language Models (LLMs), such as LaMDA and GPT-3, are often presented as breakthroughs in AI because of their ability to generate natural language text in a matter of seconds. Some suggest that LLMs can improve the way we search for information, compose creative writing, and process and comprehend text. The rapid commercial rollout of dialogue-style interfaces underpinned by LLMs, most notably ChatGPT and Bard, has spurred the public imagination and engendered accompanying concerns. The emergent concerns about LLMs mirror their complex sociotechnical nature and span multiple dimensions, including the technical (e.g., the quality and bias of the output produced by ChatGPT-like technologies, the algorithmic limitations of LLMs, their opaqueness), the social (e.g., the power relations between users and the big tech companies producing LLMs, the environmental impact of their development and use), and the cultural (e.g., the displacement of opportunities for critical reasoning, educational challenges, changes to social norms and values). This Special Issue aims to understand what is ethically at stake in the use of LLMs and how to develop and introduce LLM-based applications in a responsible way. We invite the submission of papers focusing on, but not restricted to, the following areas:

- critical examination of LLM-related case studies

- transformative effects of LLMs on individuals and societies

- the sociotechnical systems approach to LLMs

- responsible design of LLMs

- informed use and contestation of LLMs

- governance and institutional embedding of LLMs

- LLMs and value change

- power dimensions of LLMs

- sustainability and LLMs

- cultural diversity and LLMs

- LLMs and intellectual property rights

- working conditions of LLM content moderators and other microworkers

- LLMs and privacy

- LLMs and the hallucination of misinformation

- dual use potential of LLMs for persuasion, propaganda, and epistemic disorientation

- LLMs as assistive tools

- expressive authenticity and LLM-generated content

- parasocial relationships with LLM-based chatbots

Participating journal

Ethics and Information Technology is a peer-reviewed journal dedicated to advancing the dialogue between moral philosophy and the field of information and communication technology...

Editors

  • Olya Kudina

    I am an Assistant Professor in Ethics/Philosophy of Technology with a particular interest in AI ethics. I explore the dynamic moral world and the role technologies play in it through the prism of moral hermeneutics, pragmatism, and empirical philosophy. Additionally, I help integrate ethical reflection and cultural sensitivity as integral counterparts in the design process. I hold a PhD in Philosophy of Technology from the University of Twente (2019). My previous work outside academia adds diplomacy, (inter)governmental work, data protection, and privacy to my skill set.

  • Mark Alfano

    Mark Alfano uses tools and methods from philosophy, psychology, and computer science to explore topics in social epistemology, moral psychology, and digital humanities. He studies how people become and remain virtuous, how values become integrated into people's lives, and how these virtues and values are (or fail to be) manifested in their perception, thoughts, feelings, deliberations, and actions. One of the guiding themes of his work is that normative philosophy without psychological content is empty, but scientific investigation without philosophical insight is blind.

Articles

Showing 1-9 of 9 articles
  1. ChatGPT is bullshit

    • Michael Townsen Hicks
    • James Humphries
    • Joe Slater
    Original Paper Open access 08 June 2024 Article: 38
