Dear editor, we read with interest the letter by Biswas, “Role of Chat GPT in Public Health”, which briefly outlined the possible role of ChatGPT in addressing inquiries about health promotion and disease prevention techniques [1]. ChatGPT, an acronym for Chat Generative Pre-trained Transformer, is a highly advanced AI language model developed by OpenAI, based in San Francisco, California, USA [2]. Launched in November 2022, it quickly gained popularity and became one of the fastest-growing web applications in history. Trained on a broad range of data sources, ChatGPT has been designed to comprehend and produce natural-sounding text replies in a dialogue. According to the author of “Role of Chat GPT in Public Health”, the AI language model is capable of offering insights on how to promote healthy lifestyle practices (e.g., engaging in regular physical activity, maintaining a nutritious diet, and refraining from harmful substances such as tobacco and excessive alcohol), providing information on the significance of vaccination, and supplying information about the importance of periodic screening tests (e.g., mammograms and colon cancer screenings) [1]. The following limitations have also been highlighted: limited accuracy; bias and limitations of the underlying data; lack of context; limited engagement; and no direct interaction with health professionals.

What is lacking in the letter by Biswas, but is important to highlight regarding the use of this instrument in a medical context, is that ChatGPT has a tendency to fabricate references. This has been preliminarily reported [3, 4], and we are currently studying the propensity to generate fake references in an original prospective study. The reasons for this bias are still unclear but need to be examined and corrected as soon as possible. Figures 1 and 2 show ChatGPT generating fabricated references (ranging from inaccurate authors, years, and titles to DOIs with no corresponding record), together with the model's justification for the error. In scientific writing and evidence-based medicine, references play a crucial role in providing a basis for claims and recommendations. Specifically, references serve the following purposes:

  1. Supporting claims: References allow authors to support their claims and recommendations with data from existing research. By citing relevant studies and sources, authors can demonstrate that their recommendations are based on a thorough review of the available evidence.

  2. Evaluating evidence: References enable readers to evaluate the quality and reliability of the evidence used to support claims and recommendations. By providing information about the source, design, and results of studies, references help readers assess the validity and applicability of an article's findings.

  3. Establishing context: References help authors and readers place their work within the context of existing research. By citing relevant studies and sources, authors can show how their work contributes to the ongoing conversation in the field and builds upon the work of others.

  4. Identifying knowledge gaps: References can help authors identify gaps in the existing literature and suggest areas for future research. By highlighting areas where evidence is lacking or conflicting, authors can generate new research questions and guide the development of future studies.

Fig. 1

All three references provided by ChatGPT 3.5 regarding the role of music therapy in reducing dental anxiety were fake. Nonetheless, when questioned directly, the chatbot denied any intention to fabricate references

Fig. 2

All three references provided by ChatGPT 3.5 regarding the role of music therapy in reducing dental anxiety were fake. Nonetheless, when questioned directly, the chatbot denied any intention to fabricate references

Overall, references are a critical component of scientific writing and evidence-based medicine, as they enable authors and readers to evaluate the validity and reliability of claims and recommendations, and they guide the development of future research [5]. The tendency of ChatGPT 3.5 and earlier versions to generate non-existent sources is an enormous risk of bias that needs to be assessed and addressed promptly, in the interest of the scientific community and public health.