Introduction

The world is different now, and it is changing at an exponential pace. Research, science, and business are transforming faster than ever, much of it driven by artificial intelligence [1]. This is the reality: AI has seeped into the veins of society, influencing nearly every aspect of our lives. People in this landscape occupy many positions: those who reject AI outright, those who embrace the hype, and others who fall somewhere in between. A new trend, however, is emerging. Despite the numerous benefits AI offers, many people scorn its use as if it were something lazy and sinful.

To understand this phenomenon, let us take a step back and consider the broader context of technological adoption and societal reaction. Throughout history, every major technological advancement has faced resistance and skepticism. When the printing press was invented, it was met with fear that it would lead to the dissemination of misinformation and the decay of traditional knowledge [2]. Similarly, the advent of electricity was initially seen as dangerous and unnecessary [3]. Each of these innovations eventually proved its worth, but the path to acceptance was fraught with anxiety and opposition.

AI shaming, the practice of criticizing or demeaning the use of AI, can be seen as a modern iteration of this historical pattern. It reflects the deep-seated anxieties and moral dilemmas that arise whenever a powerful new technology challenges established norms and values [4]. On one end of the spectrum, we have those who view AI with almost messianic optimism, heralding it as the solution to all human problems. On the other end, we have the skeptics who fear that AI will lead to a dystopian future where human agency and authenticity are eroded.

In the middle ground, there are individuals and organizations grappling with the ethical implications and practical applications of AI. They recognize the potential for AI to drive progress and innovation but are also mindful of the risks and responsibilities that come with it. This balanced perspective is crucial for navigating the complex landscape of AI integration.

One analogy that can help illustrate the phenomenon of AI shaming is the story of Prometheus from Greek mythology. Prometheus, a Titan, defied the gods by stealing fire and giving it to humanity. Fire, a powerful and transformative tool, enabled humans to advance in countless ways. However, Prometheus was punished severely for his transgression, as the gods feared the potential misuse of such power [5]. AI, like fire, is a tool of immense potential. It can ignite progress and innovation, but it also carries the risk of unintended consequences and ethical quandaries. Those who engage in AI shaming are, in a sense, echoing the cautionary voices of the gods, warning against the unfettered embrace of a transformative force.

AI shaming is a relatively novel concept that is just beginning to gain attention, despite its significant impact on the discourse surrounding technology. It is the proverbial elephant in the room—an undercurrent of criticism that shapes how AI is perceived and utilized, yet it is seldom addressed directly. To date, there is a notable gap in the scholarly literature explicitly examining AI shaming, which leaves a critical aspect of AI integration unexplored. This gap is significant because understanding the social and psychological dynamics of AI shaming is crucial for developing balanced perspectives and responsible AI policies. This issue particularly affects academic writers and researchers, including those in fields such as public health, biomedical engineering, and bioengineering, where AI’s potential for innovation is immense yet often stymied by stigma.

What is AI Shaming, and What are Its Characteristics?

AI shaming is the practice of criticizing or looking down on individuals or organizations for using artificial intelligence to generate content or perform tasks. This phenomenon often involves dismissing the validity or authenticity of AI-assisted work, suggesting that using AI is deceitful, lazy, or less valuable than human-only efforts [6]. AI shaming can manifest as accusations, skepticism, or outright disdain for AI usage, often driven by misconceptions or fear of new technology.

AI shaming exhibits several characteristics that reflect underlying misconceptions and biases against the use of artificial intelligence. These characteristics include the following:

1. Dismissal of Authenticity: Critics often argue that AI-generated content lacks authenticity or creativity, implying that human-generated content is inherently superior. For example, a medical anthropologist submits an article about tuberculosis to a magazine and mentions he used an AI tool to help with brainstorming ideas. The editor dismisses the article, saying, “We want genuine, human creativity, not something generated by a machine.”

2. Moral Judgment: There is a tendency to frame the use of AI as inherently deceitful or unethical, suggesting that relying on AI tools is a form of cheating or cutting corners. For example, a non-native English-speaking researcher in the field of bioengineering uses an AI tool to help refine her report and acknowledges it in the manuscript. A colleague finds out and accuses her of cheating, arguing that it’s morally wrong to use AI for professional work.

3. Ad Hominem Attacks: AI shaming can involve personal attacks on individuals who use AI, questioning their skills, integrity, or commitment to their work. Say, an occupational health and safety specialist presents a report with the aid of AI during a team meeting. A colleague responds, “I guess you didn’t put much effort into this if you needed a robot to do your job for you.”

4. Resistance to Change: AI shaming often stems from a reluctance to embrace new technologies, with critics elevating traditional methods and dismissing technological advancements. For instance, a graduate student proposes using AI-driven algorithms to assist in the literature review process for their clinical psychology thesis. However, their thesis committee expresses skepticism, remarking, “We’ve always conducted literature reviews manually, ensuring thoroughness and critical analysis. Relying on AI could be seen as taking shortcuts and may not meet the rigorous standards expected in academic research.”

5. Power Dynamics: This practice can be seen as a way to maintain the status quo and preserve existing hierarchies, with individuals who are less familiar with AI presenting their lack of knowledge as a virtue. Say, in a scientific publishing house, senior editors criticize junior staff for using AI tools to speed up the editing process, implying that relying on such tools undermines the hard-earned skills of traditional editing.

6. Fear of Replacement: Critics may express concerns that AI will replace human jobs or devalue human skills, leading to resistance against its adoption. For instance, a senior consultant refuses to explore AI-driven healthcare analytics tools. He argues that these technologies might automate decision-making processes traditionally handled by consultants. He actively discourages his team from adopting AI tools, fearing it could diminish the perceived value of their expertise in client interactions and strategic planning.

7. Gatekeeping: AI shaming can involve setting arbitrary standards for what constitutes “acceptable” methods of content creation, thereby excluding or marginalizing those who use AI. Say, in an association of science communicators, an established writer discredits a newcomer who used AI to generate some ideas for her manuscript, arguing that “true” writers never rely on such technology.

8. Misunderstanding of AI: There is often a lack of understanding about how AI works, leading to exaggerated fears and misconceptions about its capabilities and impact. Say, a medical researcher expresses skepticism about using AI algorithms to analyze genomic data for cancer research. He believes AI could introduce errors or oversimplify complex biological processes. As a result, he resists integrating AI tools into his research methodology, fearing they could compromise the scientific rigor and reliability of his findings. While it may be true that there are concerns and uncertainties surrounding AI in medical research, it is essential to consider its potential to enhance data analysis and accelerate scientific discoveries.

9. Generational Divide: Younger, more tech-savvy individuals may face criticism from older generations who are less comfortable with AI and its integration into professional workflows. For example, a young public health employee uses AI to automate routine tasks and improve productivity. Older colleagues scoff at this approach, labeling it as laziness and expressing nostalgia for “the old ways” of doing things manually.

What are the Reasons Behind AI Shaming, and What are the Profiles of Those Who Engage in It?

Individuals who engage in AI shaming come from various backgrounds and act on different motivations. Below are some typical profiles. By understanding these profiles, we can better address the root causes of AI shaming and work toward a more inclusive and informed approach to integrating AI into various fields.

1. Traditionalists:
   - Characteristics: Prefer established methods and resist change.
   - Motivations: Believe that traditional methods are superior and worry that AI undermines the value of human skills.
   - Issue/Repercussion: Resistance to adopting AI may hinder innovation and efficiency in fields where AI could enhance productivity and decision-making.
   - Example: A senior biomedical researcher insists on traditional data analysis methods, dismissing AI-powered algorithms as untested and unreliable in interpreting complex biological data.

2. Technophobes:
   - Characteristics: Have a fear or distrust of new technology.
   - Motivations: Concerned about the potential negative impacts of AI, such as job loss or ethical issues.
   - Issue/Repercussion: Reluctance to embrace AI may result in missed opportunities for advancements in fields like healthcare or research, where AI could improve diagnostics and outcomes.
   - Example: A clinical psychologist avoids using AI-driven diagnostic tools, fearing they might overlook nuanced human factors essential in patient assessments.

3. Elitists:
   - Characteristics: Believe their expertise or skill set is superior to that of others.
   - Motivations: Use AI shaming to maintain their status and feel threatened by new tools that can democratize access to knowledge and skills.
   - Issue/Repercussion: Exclusionary attitudes toward AI may perpetuate inequalities in access to technological advancements, limiting broader benefits across society.
   - Example: A seasoned medical researcher scoffs at younger colleagues using AI algorithms for clinical trial data analysis, asserting that human insight and experience are indispensable in interpreting medical outcomes.

4. Luddites:
   - Characteristics: Oppose industrialization, automation, and technological change.
   - Motivations: Fear that technology, including AI, will disrupt the social and economic order.
   - Issue/Repercussion: Resistance to AI adoption may stifle economic growth and job creation in industries where AI could enhance productivity and innovation.
   - Example: A university administrator resists implementing AI systems for student performance analytics, fearing it could reduce faculty involvement in student support and mentoring.

5. Misunderstanders:
   - Characteristics: Lack a proper understanding of how AI works and its benefits.
   - Motivations: Base their criticism on misconceptions, often fearing the unknown.
   - Issue/Repercussion: Misguided opposition to AI may prevent the realization of its potential benefits, such as improved efficiency and decision-making in sectors like customer service or data analysis.
   - Example: A professor of medical research rejects AI-driven literature review tools, mistakenly believing they cannot match the thoroughness and critical analysis of manual reviews.

6. Purists:
   - Characteristics: Value human effort and creativity as the only true form of expression or work.
   - Motivations: See AI as a shortcut that diminishes the authenticity and integrity of work.
   - Issue/Repercussion: Narrow views on creativity and work may limit exploration of AI’s capacity to augment human capabilities in creative fields like art or design.
   - Example: An academic in clinical health psychology critiques peers who use AI to help draft patient assessment reports, arguing that genuine insights into mental health require empathetic human interaction and interpretation.

7. Powerholders:
   - Characteristics: Individuals in positions of power who feel threatened by the democratizing potential of AI.
   - Motivations: Fear losing control or authority if AI makes expertise more accessible.
   - Issue/Repercussion: Resistance from powerholders may impede efforts to democratize knowledge and skills through AI, hindering progress toward more inclusive and equitable practices in education and research.
   - Example: A dean of biomedical engineering resists integrating AI into the curriculum, concerned it could diminish the exclusivity of specialized knowledge taught by faculty.

8. Generational Skeptics:
   - Characteristics: Often older individuals who are more comfortable with familiar technologies.
   - Motivations: View newer technologies as unnecessary or inferior to the tools they grew up using.
   - Issue/Repercussion: Skepticism toward AI may slow down technological advancements and innovations in fields where AI could revolutionize processes and outcomes.
   - Example: An older professor of medical research rejects AI-driven predictive modeling for disease patterns, preferring traditional statistical methods they have used throughout their career.

What are the Effects of AI Shaming?

The effects of AI shaming in academia are multifaceted, influencing both individual researchers and the broader scholarly community. These effects include the following:

1. Inhibited Technology Adoption: Academic writers and researchers often hesitate to adopt AI tools because of concerns about how their use might be perceived within the scholarly community. For instance, using AI to help write a literature review could raise doubts about the authenticity or rigor of their research methods. This reluctance slows the integration of AI into academic workflows, potentially limiting opportunities to explore innovative research methodologies and data-driven insights. Consequently, there is a risk of missing out on advancements that could enhance research efficiency and accelerate knowledge discovery.

2. Stifled Innovation: Fear of AI shaming can lead researchers to avoid experimenting with AI-driven approaches, fearing it may compromise the originality or integrity of their work. This avoidance stifles innovation in academic research, as researchers may stick to traditional methods rather than exploring new AI-driven techniques for data analysis, pattern recognition, or hypothesis testing. The reluctance to embrace AI could hinder progress in fields where AI has shown potential to uncover new insights and enhance scholarly contributions.

3. Increased Stress and Anxiety: Academic writers and researchers using AI tools may experience heightened anxiety about the perceived authenticity or credibility of their work. Concerns about being judged for using AI can create a stressful academic environment, impacting researchers’ confidence in their research outcomes. This anxiety may lead to self-censorship or hesitation in using AI tools, potentially limiting the researcher’s ability to leverage technological advancements for more robust and data-driven research outcomes.

4. Widening Skill Gaps: Resistance to adopting AI in academic settings can lead to disparities in technological skills among researchers and academic staff. Researchers who are reluctant or unaware of AI’s potential may fall behind in acquiring essential skills for data analysis, machine learning, or automated research processes. This skill gap not only affects individual researchers but also hinders collaborative efforts and interdisciplinary research initiatives that could benefit from AI-driven methodologies. Addressing these skill gaps is crucial for fostering a more inclusive and innovative academic environment capable of leveraging AI’s full potential.

5. Reduced Collaboration: Concerns over AI shaming may deter academic teams from proposing or collaborating on projects that incorporate AI technologies. This reluctance stems from fears of negative perceptions or criticisms from peers, funding agencies, or academic reviewers. As a result, potential collaborations that could harness AI’s capabilities for interdisciplinary research or complex data analysis may be limited, depriving academia of opportunities to tackle multifaceted research questions and innovate across disciplines.

6. Missed Opportunities for Efficiency: Academic institutions may overlook AI tools that could streamline routine tasks such as data management, literature reviews, or manuscript preparation. This oversight can result from apprehensions about AI’s impact on academic integrity or reluctance to invest in unfamiliar technologies. Consequently, researchers may miss opportunities to enhance their productivity and efficiency, prolonging the time required to conduct studies and disseminate findings. Embracing AI in academia requires addressing these concerns and exploring how AI can complement traditional research methodologies to optimize scholarly workflows.

7. Perpetuation of Misconceptions: AI shaming in academic circles can perpetuate myths and misunderstandings about AI’s capabilities, ethical implications, and potential risks. Misconceptions about AI may arise from concerns over job displacement, biases in AI algorithms, or ethical dilemmas associated with AI-driven decision-making. These misconceptions can hinder informed discussions and balanced perspectives on integrating AI into academic research practices. Overcoming these challenges involves promoting education and awareness about AI’s benefits and limitations, fostering a more informed dialogue within the academic community to navigate AI’s evolving role in research and scholarship.

Why Should Academic Writers and Researchers not be Ashamed of Using AI?

AI has already made significant strides in various sectors, from business to healthcare and engineering, demonstrating its potential to enhance efficiency and productivity. Despite these successes, academia sometimes faces AI shaming—criticism of AI use—which unjustly creates barriers for academic writers and researchers. However, there is no reason for them to feel ashamed when using AI tools responsibly and wisely. In fact, integrating AI can improve the quality and efficiency of academic work without compromising its integrity.

The resistance to AI often stems from misunderstandings about its capabilities, mirroring historical skepticism toward new technologies. Just as Socrates feared that writing would weaken memory and knowledge, some people worry that AI will diminish the intellectual rigor of academic writing and research. However, just as writing became an essential tool for preserving and expanding knowledge, AI, when used responsibly, can augment human capabilities rather than replace them.

Throughout history, technological advancements have initially disrupted traditional labor practices but ultimately led to more efficient and productive work environments [7]. Similarly, AI can automate repetitive tasks, freeing up time for academic writers and researchers to focus on more complex and creative aspects of their work. To counter arguments against AI use, it is essential to emphasize responsibility and ethical considerations. Academic practitioners should openly declare their use of AI, fostering transparency and accountability. For instance:

“In preparing this work, I utilized ChatGPT 3.5 for outlining and proofreading. Subsequently, I carefully reviewed and edited the content to ensure accuracy and coherence. I take full responsibility for the integrity of this publication.”

By being transparent about their AI use, academic writers and researchers can demonstrate that AI enhances rather than undermines their contributions. This approach not only dispels misconceptions about AI’s role but also encourages a balanced perspective on its integration into academia. Moreover, responsible users can address biases and ethical concerns by critically evaluating AI-generated content against rigorous academic standards.

As we enter a new era of digital transformation, scholars increasingly integrate digital technologies such as AI into their writing and research practices, an approach known as postdigital academic writing [7]. This approach acknowledges the transformative impact of AI on knowledge creation, dissemination, and understanding. Just as AI enhances industry performance when paired with human expertise [1], it has the potential to revolutionize academic writing and research by combining its analytical capabilities with human creativity and insight.

Rather than succumbing to AI shaming, academic writers and researchers should embrace AI as a tool that, when used responsibly, can propel innovation and advance scholarly contributions. In doing so, academia can lead in demonstrating the ethical and effective integration of AI into research practices, paving the way for enhanced knowledge discovery and scholarly excellence in the digital age.

Conclusion

This paper has defined AI shaming as criticism aimed at individuals or organizations for using artificial intelligence to perform tasks. It highlights how such shaming often stems from misconceptions about AI’s capabilities and ethical implications, including concerns about job displacement and the devaluation of human expertise. This shaming reflects broader societal anxieties about technological progress, potentially hindering innovation and technology adoption. The paper outlines consequences such as increased stress among researchers, missed efficiency opportunities, and limitations on AI’s potential benefits in research and scholarship.

However, academic writers and researchers should not be ashamed of using AI when done responsibly and ethically. By embracing AI as a tool to augment human capabilities, being transparent about its use, and addressing ethical concerns, academia can lead the way in demonstrating responsible AI integration. This approach can help harness AI’s potential to advance knowledge and innovation while maintaining the integrity and rigor of academic work, ensuring that technological progress enhances rather than diminishes the value of human scholarship. This exemplifies collaborative intelligence, where humans and AI work together to solve problems [1]. By collaborating, they achieve results that neither could accomplish on their own.

While AI shaming may sometimes reflect valid concerns and anxieties about the ethical implications and potential drawbacks of AI, these fears should not be a reason to shame those who use AI responsibly. Instead, we should view these concerns as opportunities for learning and improvement. Addressing fears about AI involves understanding its limitations, ethical considerations, and potential biases. By openly discussing these issues and integrating responsible AI practices, academic communities can foster a more informed and ethical approach to AI integration.