The rapid development and application of artificial intelligence (AI) in various fields have sparked myriad opportunities and challenges for researchers and authors. One such AI tool, ChatGPT, has demonstrated potential in assisting with mindfulness research and dissemination. In light of a recent statement on style guidelines from the American Psychological Association (APA; McAdoo, 2023), this editorial aims to clarify the role of AI and ChatGPT in mindfulness research and publishing, and to provide guidelines for authors considering the use of such tools.

Using AI and ChatGPT in Mindfulness Research

AI tools like ChatGPT appear particularly useful for improving the overall quality of writing. These models can be employed to detect grammatical, syntactical, and formatting errors, ensuring that manuscripts meet the high standards of academic journals. By providing almost instant feedback, AI-powered tools can help authors convey their ideas more clearly and effectively to a global audience. This will be particularly useful for authors with English as a second or subsequent language who may struggle with issues such as sentence structure, nuanced phrasing, or the use of the definite article. ChatGPT therefore has the potential to create a more level playing field in which non-native English speakers are less likely to be disadvantaged by their grammar and writing abilities, ensuring that reviewers’ attention remains focused on the researchers’ ideas and contributions rather than being impeded by communication barriers.

In addition to improving communication, AI has the potential to increase the efficiency and quality of data analysis. One such application is refining and correcting data analysis scripts, such as those written in the R programming language. AI-powered tools can analyze R scripts to identify errors, inconsistencies, or inefficiencies in programming code, allowing researchers to optimize their data analyses (Hassani & Silva, 2023). These tools can thus help to ensure that the research generates reliable findings. For example, a researcher may have inadvertently introduced an error in their R script, causing a faulty analysis of their data. An AI tool could identify the mistake and suggest a correction, such as modifying a function or adjusting the script’s syntax, ultimately resulting in a more accurate and reliable interpretation of the data. In many circumstances, therefore, ChatGPT prompts can be considered functionally equivalent to programming code and thus require a degree of transparency comparable to that applied when sharing data and analysis scripts.
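To illustrate, consider a minimal, hypothetical sketch of the kind of scripting error such a tool might flag; the data frame and variable names (pre_score, post_score) are invented for illustration and do not refer to any particular study or package.

    scores <- data.frame(
      pre_score  = c(2.8, 3.1, NA, 3.4, 2.9),
      post_score = c(3.5, 3.6, 3.9, NA, 3.3)
    )

    # Faulty line: mean() returns NA because missing values are not handled,
    # silently invalidating any downstream calculations.
    mean_change <- mean(scores$post_score - scores$pre_score)

    # AI-suggested correction: exclude incomplete pairs before averaging.
    mean_change <- mean(scores$post_score - scores$pre_score, na.rm = TRUE)

In this sketch the AI tool does not change the analytic decision itself; it simply surfaces a silent error that the researcher must still evaluate and verify, consistent with the cautions discussed below.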

Potential Limitations of AI Tools Such as ChatGPT

AI large language models such as ChatGPT, despite their advanced language capabilities, have several limitations that researchers should be aware of when using them in mindfulness research. These include a lack of domain-specific expertise, potential bias, an inability to conduct new research, difficulties with language and context understanding, and ethical concerns. First, while ChatGPT is a powerful language model, it is not specifically trained in the field of mindfulness, and researchers should be cautious about relying on it for accurate and up-to-date information on mindfulness theories, practices, and research methods. Second, ChatGPT learns from a diverse range of sources, which may include biased or inaccurate information; researchers should be aware of these biases and critically evaluate any information provided by ChatGPT before incorporating it into their work. Third, ChatGPT cannot carry out new research or provide original insights on mindfulness; it can only generate text based on its pre-existing knowledge, which may be outdated or insufficient for the specific research question at hand. Fourth, ChatGPT may struggle to understand complex or nuanced language and may misinterpret or oversimplify information, so researchers should be cautious when using it for tasks that require a deep understanding of the subject matter or nuanced interpretations of mindfulness concepts. In short, while ChatGPT can be a valuable tool for generating ideas or providing support in mindfulness research, researchers should be aware of its limitations and use it cautiously, always critically evaluating its output and supplementing it with their own expertise and judgment.

Guidelines for Using ChatGPT in Mindfulness Research

When incorporating AI-generated text into manuscripts submitted to Mindfulness, authors are advised to adhere to the following guidelines, which are based not only on those issued by the APA (McAdoo, 2023) but also on those expressed in publications such as Nature (2023) and Flanagin et al. (2023):

1. ChatGPT cannot be listed as an author.

2. The use of ChatGPT needs to be outlined in the Method section or other relevant sections of the manuscript, such as Declarations. Clearly describe the role of AI in the research process, including its limitations and potential biases.

3. For transparency, provide the prompt(s) used, which may be stated in the Method section (if brief) or uploaded as Supplementary Material.

4. Proofread text generated by ChatGPT and double-check the accuracy of any factual statements with independent and reliable sources.

5. When using ChatGPT in your work or research, properly acknowledge the creators of the algorithms by including a citation in both the reference list and within the main text. Cite the software as follows: OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

Given the rapid pace of AI development, these early guidelines will no doubt become outdated in the near future. When submitting a manuscript, authors are therefore advised to specify which guidelines they have followed and to cite them. As AI tools become more sophisticated, our reliance on them will most likely increase. Yet a commonly observed phenomenon is the occurrence of so-called artificial hallucinations (Alkaissi & McFarlane, 2023), in which AI-generated answers are factually incorrect. Since such answers are generally presented with what may be described as the same sense of confidence as correct answers, users of the AI tool may not perceive the need to double-check them for accuracy. Another issue that is currently fiercely debated is plagiarism. We cannot offer any predictions on the likelihood that AI-generated output will result in plagiarized text. What we will continue to implement, however, are routine similarity checks through the publisher’s submission platform. Lastly, while the environmental impact of ChatGPT is still being assessed (An et al., 2023), authors are encouraged to use the tool mindfully, carefully crafting prompts in order to minimize the number of queries needed to achieve the intended result. Our requirement in the guidelines to provide the prompts is intended to encourage such carefully planned use of the technology.

In conclusion, AI and large language models like ChatGPT offer significant potential to improve the quality of writing and data analyses in mindfulness research. Understanding the proper use and citation of AI-generated text, and being mindful of its limitations and ethical considerations, is crucial if authors are to harness the potential of these tools and contribute meaningfully to the advancement of knowledge in the field of mindfulness research. The guidelines presented here are intended to ensure ethical and sustainable use of the technology.