Dear editor, we read with interest the letter by Biswas, “Role of Chat GPT in Public Health”, which briefly described the possible role of ChatGPT in addressing inquiries about health promotion and disease prevention techniques [1]. ChatGPT, an acronym for Chat Generative Pre-trained Transformer, is a highly advanced AI language model developed by OpenAI, based in San Francisco, California, USA [2]. Launched in November 2022, it quickly gained popularity and became one of the fastest-growing web applications in history. ChatGPT has been trained on a broad range of data sources and has been designed to comprehend and produce natural-sounding text replies in a dialogue. According to the author of “Role of Chat GPT in Public Health”, the AI language model is capable of offering insights on how to promote healthy lifestyle practices (e.g., engaging in regular physical activity, maintaining a nutritious diet, and refraining from harmful substances like tobacco and excessive alcohol), providing information on the significance of vaccination, and supplying information about the importance of periodic screening tests (e.g., mammograms and colon cancer screenings) [1]. The following limitations have also been highlighted: limited accuracy; bias and limitations of the data; lack of context; limited engagement; and no direct interaction with health professionals.
What is lacking in the letter by Biswas, but is important to highlight regarding the use of this instrument in a medical context, is that ChatGPT has a tendency to fabricate references. This has been preliminarily reported [3, 4], and we are currently studying the propensity for fake reference generation in an original prospective study. The reasons for this bias are still unclear but need to be examined and fixed at the earliest opportunity. Figures 1–2 show ChatGPT’s generation of fabricated references (from inexact authors, years, and titles to DOI codes without correspondence), and the model’s justification for the mistake.
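One practical safeguard against the fabricated DOIs described above, offered here purely as our own illustration (the function names and regular expression below are our assumptions, not part of the original letter), is to screen machine-generated citations before accepting them: first check that a candidate string is even syntactically a DOI, then query the doi.org resolver to see whether it corresponds to a real record.

```python
import re

# Assumption: the common Crossref-style DOI shape is "10.", a 4-9 digit
# registrant code, a slash, then a non-empty suffix with no whitespace.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(candidate: str) -> bool:
    """Syntactic check only: a well-formed DOI can still be fabricated."""
    return bool(DOI_PATTERN.match(candidate.strip()))

def resolver_url(doi: str) -> str:
    """The URL an editor could query (e.g., with an HTTP HEAD request):
    a fabricated DOI will fail to resolve to any landing page."""
    return f"https://doi.org/{doi}"

# A well-formed DOI taken from this letter's own reference list:
assert looks_like_doi("10.1007/s10439-023-03172-7")
# Malformed strings are rejected outright:
assert not looks_like_doi("doi:10.1007")
```

Note that the syntactic check alone cannot detect the kind of fabrication reported here, since ChatGPT's invented DOIs are often well formed; the decisive step is the resolution query, which is why editorial workflows should verify every machine-suggested reference against the registry rather than trusting its surface plausibility.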
In scientific writing and evidence-based medicine, references play a crucial role in providing a basis for claims and recommendations. Specifically, references serve the following purposes:
1. Supporting claims: References allow authors to support their claims and recommendations with data from existing research. By citing relevant studies and sources, authors can demonstrate that their recommendations are based on a thorough review of the available evidence.
2. Evaluating evidence: References enable readers to evaluate the quality and reliability of the evidence used to support claims and recommendations. By providing information about the source, design, and results of studies, references help readers assess the validity and applicability of the article findings.
3. Establishing context: References help authors and readers place their work within the context of existing research. By citing relevant studies and sources, authors can show how their work contributes to the ongoing conversation in the field and build upon the work of others.
4. Identifying knowledge gaps: References can help authors identify gaps in the existing literature and suggest areas for future research. By highlighting areas where evidence is lacking or conflicting, authors can generate new research questions and guide the development of future studies.
Overall, references are a critical component of scientific writing and evidence-based medicine, as they enable authors and readers to evaluate the validity and reliability of claims and recommendations, and guide the development of future research [5]. The tendency of ChatGPT 3.5 and earlier versions to generate non-existent sources is an enormous risk of bias that needs to be assessed and addressed promptly, in the interest of the scientific community and public health.
References
1. Biswas, S. S. Role of chat GPT in public health. Ann. Biomed. Eng. 51(5):868–869, 2023. https://doi.org/10.1007/s10439-023-03172-7.
2. Gilson, A., C. W. Safranek, T. Huang, V. Socrates, L. Chi, R. A. Taylor, and D. Chartash. How does ChatGPT perform on the United States Medical Licensing Examination? The implications of large language models for medical education and knowledge assessment. JMIR Med. Educ. 9:45312, 2023. https://doi.org/10.2196/45312.
3. Hill-Yardin, E. L., M. R. Hutchinson, R. Laycock, and S. J. Spencer. A Chat(GPT) about the future of scientific publishing. Brain Behav. Immun. 110:152–154, 2023. https://doi.org/10.1016/j.bbi.2023.02.022.
4. Manohar, N., and S. S. Prasad. Use of ChatGPT in academic publishing: a rare case of seronegative systemic lupus erythematosus in a patient with HIV infection. Cureus. 15(2):34616, 2023. https://doi.org/10.7759/cureus.34616.
5. Bahadoran, Z., P. Mirmiran, K. Kashfi, and A. Ghasemi. The principles of biomedical scientific writing: citation. Int. J. Endocrinol. Metab. 18(2):e102622, 2020. https://doi.org/10.5812/ijem.102622.
Funding
This research received no external funding.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Associate Editor Stefan M. Duma oversaw the review of this article.
Cite this article
Frosolini, A., Gennaro, P., Cascino, F. et al. In Reference to “Role of Chat GPT in Public Health”, to Highlight the AI’s Incorrect Reference Generation. Ann Biomed Eng 51, 2120–2122 (2023). https://doi.org/10.1007/s10439-023-03248-4