1 Introduction

In recent years, the landscape of education has evolved significantly, with notable advancements (Nigam et al., 2021). Educational institutions are increasingly using blended learning approaches, which combine online and offline methods to improve students' academic performance and overall efficiency (Sok & Heng, 2023; Yu, 2023). This change is supported by the integration of artificial intelligence (AI) technologies like ChatGPT (Chat Generative Pre-trained Transformer). ChatGPT has become a valuable resource for adapting to new educational models and helping students complete assignments more effectively (İpek et al., 2023; Sok & Heng, 2023; Susnjak, 2022; Yu, 2023). Developed by the American AI company OpenAI, ChatGPT gained over a million users within a week of its launch on November 30, 2022 (Dempere et al., 2023; Mollman, 2022; Sok & Heng, 2023). It has since become one of the most widely used AI tools in education (Chan & Lee, 2023; Dempere et al., 2023; Sok & Heng, 2023). ChatGPT is popular because of its ability to generate human-like responses to various text-based inputs, drawing on sources such as books, articles, and websites (Adiguzel et al., 2023; Chan & Lee, 2023; Sok & Heng, 2023). It offers on-demand support, helps fill learning gaps, reinforces understanding, and supports self-paced learning anytime and anywhere (Chan & Lee, 2023).

Many studies have examined the potential of AI technologies in university education, discovering new ways to improve teaching methods, learning experiences, and student engagement (Adiguzel et al., 2023; Baidoo-Anu & Ansah, 2023; Chan & Hu, 2023; Chan & Lee, 2023; Nigam et al., 2021). Jordanian universities have actively embraced these technological advancements to bridge the digital divide and elevate educational standards (Alzoubi, 2024). Jordan's higher education sector serves about 342,000 students, with 74% attending public universities and 26% in private institutions. The Ministry of Education and the Ministry of Higher Education and Scientific Research oversee these institutions, working to improve learning experiences through emerging technologies, including AI (Bataineh & Ibbini, 2022). Notably, several public and private universities in Jordan have incorporated AI technologies into their curricula (Alzoubi, 2024; Bataineh & Ibbini, 2022). A quantitative study by Ajlouni et al. (2023) at the University of Jordan found that students generally had a positive attitude toward using ChatGPT as a learning tool, though concerns about data accuracy and overreliance were noted, highlighting the need for careful curriculum integration. The rapid adoption of ChatGPT in Jordanian universities has raised unique challenges for students, including potential plagiarism, decreased originality, overreliance on technology, reduced critical thinking skills, and a general decline in assignment quality (Gammoh, 2024).

While much research has focused on the perceived risks and benefits of using Generative AI technologies like ChatGPT from the perspective of students or both students and educators worldwide, these studies have mainly used quantitative methods (e.g., Baidoo-Anu & Ansah, 2023; Chan & Hu, 2023; Hung & Chen, 2023; Kasneci et al., 2023). There is a notable lack of attention to the specific challenges faced by educators when students use ChatGPT in academic assignments. Existing studies have largely ignored how these technologies affect educators' experiences and job performance (Adeshola & Adepoju, 2023; Aydın & Karaarslan, 2022; Frye, 2022; Susnjak, 2022). This study aims to address this critical gap by exploring the unique challenges educators in Jordan face, providing a comprehensive understanding of how these tools impact educational quality from the educators' perspective. Focusing on educators' experiences is crucial for responsible integration and developing strategies to mitigate potential negative effects on teaching and learning.

To guide this study, the following research questions were formulated:

1. What are the perceived challenges faced by academics in Jordan when students use ChatGPT in their academic assignments?

2. What strategies do university educators in Jordan propose to overcome the challenges they face when students use ChatGPT in academic assignments?

These research questions aim to gain an in-depth understanding of the perspectives of academics in Jordan regarding the integration of ChatGPT in educational settings and to explore practical solutions for addressing the identified challenges.

2 Literature

2.1 Introduction to ChatGPT in education

The incorporation of ChatGPT into educational settings has sparked a dual response within the academic community, prompting an examination of both its advantages and potential challenges. Hung and Chen (2023) explored how ChatGPT affects the learning of Chinese university students using a mixed-method approach. The results demonstrated enhancements in efficiency and improved time management skills, attributed to ChatGPT’s ability to provide personalized and specialized assignment support at any time (Hung & Chen, 2023; Susnjak, 2022). ChatGPT’s quick, tailored responses made completing assignments easier and faster (Hung & Chen, 2023).

2.2 Benefits of ChatGPT in educational settings

Yu (2023) highlighted the diverse capabilities of ChatGPT in the realm of education. This global review showed that ChatGPT helps with editing and proofreading essays, answering academic questions, and guiding students through research and planning stages up to final submission (Yu, 2023). ChatGPT is a particularly valuable writing tool for students whose primary language is not English (Yu, 2023). Chan and Hu (2023) also highlighted its benefits for non-native English speakers, such as generating ideas and offering constructive feedback. An experimental study by Warschauer et al. (2023) explored how AI-generated text can assist second language writers with tasks such as editing, answering questions, and explaining grammar or vocabulary. The study also found that ChatGPT is useful for research and writing assignments (Warschauer et al., 2023).

2.3 Concerns and challenges with ChatGPT use

However, educators have raised significant concerns about ChatGPT's use in academia. They worry about the potential for cheating, with students relying too heavily on AI to write essays and assignments (Adeshola & Adepoju, 2023; Hung & Chen, 2023; Pavlik, 2023; Sok & Heng, 2023; Susnjak, 2022). There are also concerns about students misrepresenting AI-generated work as their own, making it hard for plagiarism detection tools to identify such content (Adeshola & Adepoju, 2023; Aydın & Karaarslan, 2022; Baidoo-Anu & Ansah, 2023; Frye, 2022; Kasneci et al., 2023; Peres et al., 2023; Sok & Heng, 2023; Susnjak, 2022; Yu, 2023). Studies have shown that AI-generated essays can be difficult for plagiarism detection software to catch, complicating efforts to prevent cheating (Adeshola & Adepoju, 2023; Aydın & Karaarslan, 2022; Frye, 2022). Furthermore, some students may use ChatGPT to plagiarize, leading to fraudulent submissions and issues with grading fairness (İpek et al., 2023). This reliance on AI can also hinder students from developing important skills such as writing, critical thinking, and problem-solving (Adeshola & Adepoju, 2023; İpek et al., 2023; Susnjak, 2022). As a result, educators face more work in grading and adjusting assessment criteria (Adeshola & Adepoju, 2023; Hasanein & Sobaih, 2023). The challenges posed by AI in education have led to debates about the appropriateness of using tools like ChatGPT (İpek et al., 2023).

2.4 The global and Jordanian context

While considerable scholarly attention globally has focused on exploring the perceived risks for students associated with the use of GenAI technologies like ChatGPT in academia, examining perspectives from university students or both students and educators (Baidoo-Anu & Ansah, 2023; Chan & Hu, 2023; Hung & Chen, 2023; İpek et al., 2023; Kasneci et al., 2023; Sallam, 2023a; Sok & Heng, 2023; Warschauer et al., 2023; Yu, 2023), less attention has been given to the challenges faced by educators when students use ChatGPT in assignments. Most studies have not specifically looked at the experiences of educators (Adeshola & Adepoju, 2023; Aydın & Karaarslan, 2022; Frye, 2022; Susnjak, 2022). Even though AI tools offer benefits like saving time and providing feedback (Chan & Hu, 2023; Hung & Chen, 2023; Warschauer et al., 2023; Yu, 2023), there are still concerns about how these tools affect education quality, particularly in Jordan, due to the challenges they create for educators. It is crucial to understand the difficulties educators face with the use of ChatGPT in academic settings, as highlighted by Sallam et al. (2023). Academics play a key role in education (Chan & Lee, 2023; Ghotbi & Ho, 2021), and their insights are essential for understanding how these technologies affect their work and student education (Kumar, 2023). Addressing these challenges is important for improving teaching and learning and ensuring responsible use of AI tools like ChatGPT (Chan & Hu, 2023). Without proper oversight, GenAI applications like ChatGPT could spread misinformation or inaccurate content widely (Harrer, 2023).

2.5 Research focus and objectives

Consequently, the objective of this study is to gain an in-depth understanding of academics' perspectives in Jordan regarding the perceived challenges they face when students use ChatGPT in their assignments. Additionally, the study aims to explore the strategies or recommendations suggested by university educators in Jordan to mitigate these perceived challenges.

3 Methodology

3.1 Research rationale and approach

Although substantial scholarly attention has focused on exploring the perceived risks and benefits for students associated with the incorporation of GenAI technologies like ChatGPT in academia (Baidoo-Anu & Ansah, 2023; Chan & Hu, 2023; Hung & Chen, 2023; İpek et al., 2023; Kasneci et al., 2023; Sallam, 2023a; Sok & Heng, 2023; Warschauer et al., 2023; Yu, 2023), less attention has been given to understanding the challenges faced by academics themselves when students employ ChatGPT in academic assignments (Adeshola & Adepoju, 2023; Aydın & Karaarslan, 2022; Dempere et al., 2023; Frye, 2022; Susnjak, 2022). A majority of these studies, conducted globally, have adopted methodological approaches such as systematic literature reviews (Aydın & Karaarslan, 2022; Dempere et al., 2023; Hung & Chen, 2023; İpek et al., 2023; Sok & Heng, 2023; Yu, 2023), quantitative analyses (Adeshola & Adepoju, 2023; Chan & Hu, 2023; Chan & Lee, 2023; Ghotbi & Ho, 2021), and experimental designs (Frye, 2022; Kumar, 2023; Pavlik, 2023; Susnjak, 2022; Warschauer et al., 2023). However, the current body of literature lacks a rigorous examination of academics' perspectives on the challenges they face when students use ChatGPT in their academic submissions. This gap is particularly evident in the absence of an in-depth qualitative analysis conducted within the educational landscape of Jordan.

Therefore, the aim of this qualitative study is to explore the viewpoints of academics in Jordan concerning the perceived challenges they face when students use ChatGPT in their academic assignment submissions, and to identify possible strategies for mitigating these perceived risks. Using a convenience sampling method, interviews were conducted with 27 academics representing varying levels of professional experience across public and private universities in Jordan. The convenience sampling approach was chosen for its simplicity and accessibility in recruiting academics, making it a rapid and practical method suitable for the scope of this study (Bogdan & Biklen, 1997).

3.2 Sample and data collection method

The study began by distributing invitation letters that included a concise overview of the study and the authors’ contact details. These were sent to potential participants interested in taking part in interviews. Following the convenience sampling technique, the invitations were disseminated through diverse WhatsApp and Facebook groups dedicated to university academics in Jordan, groups in which the author is an active member. A total of 27 academics (14 females and 13 males), whose identities remain confidential, expressed interest and provided consent to take part in the interview process. The interviews, conducted in English, encompassed a diverse group of participants aged 28 to 60, representing various private and public universities and coming from eleven departments across five distinct faculties. Their collective experience in higher education ranged from 2 to 30 years, and they occupied diverse positions, including full professors (n = 3), associate professors (n = 9), assistant professors (n = 12), and lecturers (n = 3). To preserve anonymity, participants were assigned numerical codes (1, 2, 3, etc.); detailed participant information is available in Table 1.

Table 1 Participants’ details

The formulation of interview questions drew upon both existing literature and the researchers’ own academic expertise. These semi-structured interviews provided a flexible platform, allowing participants to openly and comfortably express their perspectives (Bell et al., 2022) on the challenges they face as university academics when their students use ChatGPT in their assignments. Each interview lasted approximately 45 to 60 minutes. Deviating from conventional recording practices, interviews were intentionally not recorded, to foster an environment in which participants felt unrestrained in sharing their challenges and experiences (Al-Twal, 2022). This decision, influenced by considerations of trust, holds particular relevance in the Middle East and North Africa (MENA) region (Al-Twal & Cook, 2022). Acknowledging the critical role of trust in eliciting reliable information from interviewees (Putnam et al., 1993), the absence of audio recording was compensated for by careful note-taking during and immediately after each interview. This practice aimed to capture relevant points and ensure the validity and accuracy of the data, which was confirmed by sharing the notes with participants at the end of each interview (Bell et al., 2022).

3.3 Data analysis and interpretation

The qualitative data were analyzed thematically, following the procedures detailed by Rubin and Rubin (2011). Initially, a sufficient amount of time was dedicated to note-taking during interviews, composing notes immediately afterward, and transcribing the interviews to ensure accuracy and minimize reliance on memory, which could potentially skew the study results (Rubin & Rubin, 2011). To avoid mixing responses, only one interview was conducted and transcribed per day. This approach helped maintain the integrity of each participant's responses.

Text coding was done using the comment function in Microsoft Word, which ensured that the codes accurately reflected participants' intended meanings (Clarke & Braun, 2013). In the initial coding phase, themes discussed by participants were identified and coded. As the analysis progressed, the researcher became more selective, focusing on codes most relevant to the research questions. After coding the first six interviews, a codebook was created to standardize the process and ensure consistency in the application of codes (Clarke & Braun, 2013).

Open coding was conducted on the interview transcripts, with each segment of text assigned a descriptive code. For example, statements about plagiarism detection tools were coded as 'Credibility.' After initial coding, the codes were reviewed and grouped into categories. For instance, codes such as 'Credibility,' 'Authenticity,' and 'Detection' were combined into a sub-theme labeled 'Difficulty in Assessing Assignment Authenticity.' By analyzing how these categories related to one another, broader themes were identified. This process led to the main theme 'Risks,' which encompassed sub-themes such as 'Difficulty in Assessing Assignment Authenticity.' The codebook and the corresponding themes are detailed in the codebook table in the Supplementary File, providing transparency in how the data were systematically categorized.
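To make this code-to-theme hierarchy concrete, the minimal Python sketch below illustrates how open codes such as 'Credibility,' 'Authenticity,' and 'Detection' roll up into the sub-theme and main theme named above. This is purely a hypothetical illustration: the actual coding was done manually with the comment function in Microsoft Word, and the example segments and the `codebook` mapping are invented for demonstration rather than drawn from the study's data.

```python
# Hypothetical illustration of the code -> sub-theme -> theme hierarchy
# described above. The real analysis was manual (Microsoft Word comments);
# this sketch only mirrors the worked example given in the text.
from collections import defaultdict

# Open codes assigned to transcript segments (segments invented for illustration).
segments = [
    ("detectors lack credibility in evaluating submissions", "Credibility"),
    ("cannot distinguish AI text from original student work", "Authenticity"),
    ("undetected AI-generated content skews grading", "Detection"),
]

# Codebook mapping: open code -> (sub-theme, main theme), as in the example above.
codebook = {
    "Credibility": ("Difficulty in Assessing Assignment Authenticity", "Risks"),
    "Authenticity": ("Difficulty in Assessing Assignment Authenticity", "Risks"),
    "Detection": ("Difficulty in Assessing Assignment Authenticity", "Risks"),
}

# Group coded segments under their sub-theme and theme.
themes = defaultdict(lambda: defaultdict(list))
for text, code in segments:
    sub_theme, theme = codebook[code]
    themes[theme][sub_theme].append((code, text))

for theme, sub_themes in themes.items():
    print(theme)
    for sub_theme, items in sub_themes.items():
        print("  " + sub_theme)
        for code, text in items:
            print(f"    [{code}] {text}")
```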

To ensure reliability and validity, the data were meticulously organized under their respective codes in a single file, with key points summarized for each code. The researcher manually compared content within codes to identify variations among participants, which included examining nuances in participants' expressions and the context of their statements. This process also involved addressing any conflicts or contradictions in participants' responses, which enriched the thematic analysis by highlighting the complexity and diversity of experiences among the interviewees. For example, while some participants viewed the use of plagiarism detection tools as essential for maintaining academic integrity, others expressed concerns about the potential for these tools to misinterpret honest work. Additionally, a process of constant comparison was employed, whereby each newly coded segment was compared with existing codes to refine the categories and ensure they accurately captured the data. This iterative process helped in refining the themes by merging overlapping codes and splitting broad codes into more specific sub-categories, thereby achieving a nuanced understanding of the data.
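The constant-comparison step can be sketched in the same spirit. The toy function below is a hypothetical schematic, not the study's procedure (the comparison in this study was done manually): it checks each newly coded segment's working definition against the existing codes, merging it into an overlapping code or otherwise adding a new one. The token-overlap `similarity` measure and the `merge_at` threshold are crude stand-ins for the researcher's qualitative judgment about whether two codes capture the same idea.

```python
# Schematic sketch of constant comparison (hypothetical; the study's
# comparison was manual). Similarity is a crude token-overlap stand-in
# for the researcher's judgment.

def similarity(a: str, b: str) -> float:
    """Jaccard overlap between the token sets of two code definitions."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def compare_segment(new_definition: str, codebook: dict[str, str],
                    merge_at: float = 0.35) -> str:
    """Assign a newly coded segment to an existing code if its working
    definition overlaps enough with one; otherwise register a new code."""
    for code, definition in codebook.items():
        if similarity(new_definition, definition) >= merge_at:
            return code  # merge into the existing, overlapping code
    code = f"code_{len(codebook) + 1}"
    codebook[code] = new_definition  # refine the categories with a new code
    return code

codebook = {"Credibility": "doubts about the credibility of submitted work"}
# Overlapping definition -> merged into the existing 'Credibility' code.
print(compare_segment("questioning the credibility of every submitted assignment",
                      codebook))
```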

Finally, the themes were synthesized into a coherent narrative that linked the empirical findings to the research questions and broader literature. The final themes and sub-themes, as represented in Fig. 1, provided a structured interpretation of the data, highlighting the risks faced by educators and the proposed strategies to overcome those challenges. The systematic and thorough approach to thematic analysis, as detailed above, aimed to provide a robust and credible account of the participants' experiences and perspectives, contributing valuable insights to the study's objectives.

Fig. 1 Coding process

4 Findings and discussion

The data analysis provided significant insights into the challenges faced by university educators in Jordan regarding the use of ChatGPT in student assignments. Three main challenges emerged: difficulty in assessing assignment authenticity, increased workload for plagiarism monitoring and detection, and concerns about job displacement. The following discussion elaborates on these challenges, their implications within the academic realm, and possible mitigation strategies suggested by university educators in Jordan.

4.1 Challenges

4.1.1 Challenge 1: Difficulty in assessing assignment authenticity

Academics have expressed significant challenges in assessing the authenticity of student assignments, particularly in distinguishing between student-generated work and content produced by ChatGPT. This concern is widespread among participants of various ranks, faculties, and levels of experience. For example, Participant (2), a 57-year-old professor of marketing, noted the difficulty in differentiating between high-quality student work and AI-generated assignments. He stated, “Because it’s a new AI tool, the main issue… is that I am unable to effectively judge and distinguish between excellent students who work themselves on the assignment and ChatGPT users who plagiarize and rely on the software to solve their assignments on their behalf.” This challenge is reflective of broader academic concerns, as studies suggest that AI tools like ChatGPT can blur the lines between original and plagiarized work (Dempere et al., 2023; Hasanein & Sobaih, 2023).

Concerns about the credibility of student work and the effectiveness of current plagiarism detection methods were voiced by participants across various academic ranks and fields. For instance, Participant (5), a 36-year-old assistant professor of business administration, remarked on the impact of lacking high-end plagiarism detection tools: “If the university does not have high-end tools for detecting plagiarism, this leads to a lack of credibility in evaluating the students’ submissions.” This sentiment aligns with the findings of İpek et al. (2023) and Susnjak (2022), who emphasize the need for advanced detection tools to uphold academic integrity. Participant (6), a 29-year-old lecturer of cybersecurity, expressed frustration over questioning the credibility of assignments: “I am literally questioning every assignment’s credibility!” This sentiment is particularly concerning given the rapid advancements in AI, which can produce highly sophisticated text (Chan & Lee, 2023). Similarly, Participant (17), a 55-year-old associate professor of translation, noted difficulties in distinguishing AI-generated text from original student work: “I am facing difficulty in distinguishing AI text outcomes and students’ original work.” This difficulty is compounded by the high expectations placed on faculty to accurately assess student submissions (Kumar, 2023). Students' overreliance on ChatGPT is driven by its ability to generate text quickly and efficiently, which makes it attractive to students facing time constraints or needing to produce a large amount of content (Chan & Hu, 2023; Chan & Lee, 2023; Frye, 2022). Participant (19), a 39-year-old male assistant professor of data science and AI, noted that students using AI tools could deceive faculty members with their assignments: “Students who rely on AI tools can fool faculty members with their assignments and homework.” This insight is supported by Yu (2023), who discusses the potential for AI to facilitate academic dishonesty. Likewise, Participant (22), a 44-year-old female assistant professor of biology and biotechnology, struggled to gauge students' authentic work and true understanding: “I am unable to gauge students' authentic work and identify their true level of understanding.” This concern underscores the broader implications of AI for educational assessment, as educators must ensure that grades reflect genuine student performance (Peres et al., 2023).

Regarding grading fairness, Participant (9), a 55-year-old male associate professor of biology and biotechnology, observed that failing to use plagiarism detection tools might lead to unfair grading: “Sometimes, if I do not check on the content of the assignment submission through a plagiarism detector, the student who used ChatGPT may get a higher mark.” Hasanein and Sobaih (2023) highlight the risk of unfair grading due to undetected AI-generated content. Participant (15), a 39-year-old female assistant professor of translation, mentioned the possibility of overestimating “AI plagiarized” work in grades: “It might lead to wrong evaluation… sometimes overestimating ‘AI plagiarized’ work in terms of grades.” Similarly, Participant (23), a 37-year-old female assistant professor of finance, noted that undetected use of ChatGPT could affect her judgment and lead to higher grades: “It can impact my judgment as an instructor on the submitted work and give higher grades for the students who used ChatGPT.” Frye (2022) warns of severe consequences, including failing grades and loss of credibility. Participant (26), a 48-year-old female associate professor of cybersecurity, observed that the varying quality and consistency of ChatGPT-generated assignments present greater challenges in grading: “I have noticed that assignments written using ChatGPT may be of varying quality and consistency, so it is placing greater challenges when I mark these assignments.” This variability complicates the already challenging task of evaluating student work fairly (Chan & Hu, 2023).

Overall, the overreliance on ChatGPT poses a significant challenge for educators, impacting assignment authenticity detection, grading fairness, and the development of critical skills. Gammoh (2024) and Singh et al. (2023) highlight the risk of university students’ diminished independent research efforts due to AI dependency. The concerns about AI's impact on academic integrity, critical thinking, and ethical assessment indicate a pervasive issue in higher education that requires comprehensive strategies.

The study revealed no significant differences in concerns across faculty ranks, suggesting that the challenges posed by ChatGPT are recognized universally. However, specific challenges varied: technical fields like cybersecurity and data science noted difficulties in detecting sophisticated AI content, while humanities and social sciences focused more on the ethical implications. This variation underscores the multifaceted nature of the issue and the need for tailored solutions.

4.1.2 Challenge 2: Increased workload for plagiarism monitoring and detection

The use of ChatGPT has increased the workload for educators, requiring additional time to monitor assignments for AI-generated content, identify plagiarism, and verify accuracy. This additional burden was a common theme among participants, reflecting a significant increase in workload across ranks, faculties, and experience levels.

Participant (2), a 57-year-old male professor of marketing, expressed frustration with the time spent detecting plagiarized information and correcting inaccurate data from ChatGPT: “I am actually wasting my time to detect plagiarized information and placing more effort in discovering inaccurate and sometimes irrelevant information generated by the software.” This concern aligns with the findings of Gammoh (2024) and Hasanein and Sobaih (2023), who suggest that easy access to information through ChatGPT might lead to its overuse, diminishing students' critical thinking and research skills. Participant (7), a 32-year-old female assistant professor of engineering, noted spending considerable time correcting incorrect information copied from ChatGPT: “I spend more time and effort in correcting the incorrect information that the student submits which is copied from ChatGPT, without any thought or revision.” This issue is corroborated by studies highlighting inaccuracies in ChatGPT-generated information (Ghotbi & Ho, 2021; Gravel et al., 2023; Harrer, 2023; Hasanein & Sobaih, 2023; Lin, 2023; Sallam, 2023b).

Participant (12), a 38-year-old female assistant professor of marketing, reported increased burden in discerning whether students researched independently or relied on ChatGPT: “When they use ChatGPT in their homework, it is increasing our burden to disclose who actually researched and solved the requirements on their own and who relied solely on the tool.” Grassini (2023) identified that ChatGPT's raw data could lead to unreliable information, increasing the need for educators to correct assignment mistakes (Gravel et al., 2023; Sallam, 2023b). Participant (20), a 45-year-old male associate professor of business administration, mentioned fatigue from online submissions: “It makes me tired now when I ask students to submit their assignments online, as I spend time and effort to detect students’ own mistakes and monitor the plagiarism weight properly.” This challenge is exacerbated by ChatGPT's ability to evade plagiarism checks with varied responses (İpek et al., 2023; Zhai, 2022).

Differences in participants' responses indicate that the challenges are experienced universally but with variations based on rank and faculty. Senior faculty, such as Participant (2) and Participant (20), emphasized the time-intensive nature of detecting and correcting AI-generated content. In contrast, younger faculty members, like Participant (7) and Participant (18), focused on the effort required to ensure fairness and accuracy in evaluations.

Participant (15), a 39-year-old female assistant professor of translation, described spending hours verifying sources due to inadequate plagiarism detectors: “Time wasting at its best… I am spending hours just trying to double check the resources of the submitted work, as detectors do not properly identify the plagiarized part or how the student used ChatGPT.” This sentiment is echoed in literature, emphasizing the difficulty plagiarism detection software faces with ChatGPT (Adeshola & Adepoju, 2023; Aydın & Karaarslan, 2022; Baidoo-Anu & Ansah, 2023; Frye, 2022; Kasneci et al., 2023; Peres et al., 2023; Sok & Heng, 2023; Susnjak, 2022; Yu, 2023).

Participant (18), a 29-year-old male lecturer of business administration, expressed a desire for fair evaluations, spending effort to determine if ChatGPT was used for grammar checks or cheating: “I do not want to be unfair… I spend effort checking the submissions and trying to figure out if my students used ChatGPT only for grammar checks or paraphrasing rather than cheating.” Participant (21), a 41-year-old male assistant professor of human resources, noted the challenge in distinguishing between legitimate assistance and plagiarism: “Mainly, the struggle is that some students are using ChatGPT for legitimate reasons, but it is very hard to differentiate between valid and invalid uses.”

Notably, the theme of increased workload due to ChatGPT was mentioned across different faculties, including marketing, engineering, business administration, and human resources. However, the emphasis on ensuring fairness and accurately assessing students' understanding appeared more frequently among junior faculty members and lecturers, suggesting that less experienced educators might be more concerned with maintaining equitable standards. The recurring theme among participants highlights the significant impact of ChatGPT on educators' workloads, reflecting a broader issue identified in multiple studies. The difficulty plagiarism detection software has in discerning how ChatGPT was used further complicates educators' task of maintaining academic integrity while ensuring fair evaluations (Adeshola & Adepoju, 2023; Aydın & Karaarslan, 2022; Baidoo-Anu & Ansah, 2023; Frye, 2022; Kasneci et al., 2023; Peres et al., 2023; Sok & Heng, 2023; Susnjak, 2022; Yu, 2023).

4.1.3 Challenge 3: Concerns about job displacement

This section explores the prevalent fear among educators regarding the integration of AI tools like ChatGPT in education, particularly its potential impact on job security and the roles of human instructors. The concern stems from the perception that AI may eventually automate tasks traditionally performed by educators, such as grading assignments, providing feedback, and even delivering lectures. This apprehension is widespread, cutting across different academic ranks and disciplines, reflecting a broad concern about AI's potential threat to traditional educational roles.

Participants expressed significant concerns about the future role of human educators in the face of AI advancements. Participant 1, a 33-year-old male assistant professor of intelligent robotics, stated, “ChatGPT, I feel, is eliminating the role of instructors as it replaces the fundamental interaction between students and educators in the educational journey.” Similarly, Participant 5, a 36-year-old female assistant professor of business administration, lamented, “ChatGPT will likely diminish the role of faculty members and may lead to their redundancy.” These sentiments highlight a pervasive fear that AI could render human educators obsolete over time. This apprehension is not confined to specific disciplines or levels of seniority. Participant 9, a 55-year-old male associate professor of biology and biotechnology, expressed concern that “AI's increasing capabilities might lead students to rely less on human instructors and more on self-learning through AI.” Participant 16, a 60-year-old male professor of civil engineering, added, “In the future, say 20–25 years from now, educators may no longer be necessary.” These perspectives illustrate a widespread anxiety about the longevity of traditional teaching roles in an AI-dominated educational landscape.

Research findings mirror these concerns, highlighting the potential impact of ChatGPT on employment within education. Studies by Adeshola and Adepoju (2023), Hatzius (2023), and Zhai (2022) emphasize worries about job displacement or reduced demand for human instructors due to AI's capabilities. This anxiety is a common theme in discussions about AI’s role, often centered on the fear of job loss (Adeshola & Adepoju, 2023).

Educators also expressed concerns about their ability to adapt to rapidly evolving educational practices driven by AI technologies. Participant 7, a 32-year-old female assistant professor of engineering, voiced uncertainty: “With the rapid advancement of ChatGPT and similar AI technologies, I worry about our capacity to keep pace.” This concern was echoed by Participant 14, a 55-year-old female associate professor of civil engineering, who questioned, “How can we keep up with such rapid technological advancements in AI?” These concerns are supported by recent research indicating the potential for AI to automate various educational tasks traditionally performed by humans, such as grading, feedback provision, and even lecture delivery (Hatzius, 2023; Zhai, 2022). These studies underscore the dual nature of AI's impact on education—while it may streamline administrative functions, it also poses challenges to the role and relevance of human educators (Adeshola & Adepoju, 2023).

However, not all participants shared the same level of concern regarding job displacement by AI. For instance, Participant 22, a 44-year-old female assistant professor of biology and biotechnology, remarked, “ChatGPT and AI tools cannot help students in the emotional side!” This observation underscores that AI tools like ChatGPT cannot adequately address the emotional aspects of teaching, highlighting the irreplaceable role of human engagement and compassion in educational practices. This sentiment aligns with studies emphasizing the importance of human interaction, creativity, and empathy in education, areas where AI currently falls short (Dempere et al., 2023; Felix, 2020; Murtarelli et al., 2021). Adeshola and Adepoju (2023) argue that some educators perceive AI as lacking the emotional engagement necessary for motivating students, which may diminish student engagement. Interpersonal interaction remains a cornerstone of the educator's role. Similarly, Dempere et al. (2023) and Murtarelli et al. (2021) have pointed out that AI's limitations include creativity (e.g., inventing new courses or developing innovative teaching methods), interpersonal interaction (e.g., counseling, providing personalized feedback, and resolving student issues), complex reasoning, and empathy—core aspects of educational practice.

While acknowledging the potential benefits of AI in education, such as streamlining administrative tasks, it is essential to address educators' concerns about job security and the future of their roles. AI integration should be seen as a complement to, not a replacement for, human educators. This approach preserves the essential human aspects of education while leveraging AI's capabilities to enhance learning outcomes (Adeshola & Adepoju, 2023; Zhai, 2022). Rather than displacing educators and administrators, AI should be viewed as a supportive tool that enhances and facilitates education. Therefore, educational institutions in Jordan must raise awareness among educators about these dynamics to alleviate concerns about job security, as highlighted in the feedback from the university educators interviewed in this study.

4.2 Strategies

When asked for recommendations to mitigate the challenges they face when students use ChatGPT in their assignment submissions, namely difficulty in assessing assignment authenticity, increased workload for plagiarism monitoring and detection, and concerns about job displacement, the academics put forward several suggestions.

4.2.1 Strategy 1: Enhancing plagiarism detection tools

To address the challenge of assessing assignment authenticity, universities in Jordan should enhance the availability and quality of plagiarism detection tools. Providing all academics with access to high-quality plagiarism detection services that include AI detection capabilities is essential. Participant 6, a 29-year-old female lecturer in cybersecurity, emphasized, “These will help us as educators in detecting AI-written assignments and assessing students’ learning outcomes.” Participant 2, a 57-year-old male professor of marketing, agreed, stating, “Running submissions through an AI detector ensures students are not assessed unfairly.” This aligns with Yu (2023), who highlighted the need for continuous improvement in plagiarism detection mechanisms to ensure fair evaluations that accurately reflect students' abilities. Additionally, promoting ethical practices is crucial. Participant 10, a 60-year-old male professor of business administration, stressed, “Ensuring that students are aware of disciplinary actions for plagiarism according to university regulations.” Similarly, Participant 15, a 39-year-old female assistant professor of translation, suggested, “Developing and sharing ethical frameworks and academic integrity principles for using ChatGPT in education.” This recommendation is consistent with literature advocating for consistent laws and regulations that align AI tools with educational ethics (Hasanein & Sobaih, 2023; Yu, 2023).

To effectively implement this strategy in Jordan, universities should invest in and provide training sessions for educators to familiarize them with these tools, ensuring they can effectively identify AI-generated content. Promoting awareness among students about the consequences of academic dishonesty, including the use of AI tools for plagiarism, is also essential. Universities in Jordan can develop and disseminate clear guidelines on academic integrity that incorporate the ethical use of AI in education.

4.2.2 Strategy 2: Modifying assessment methods to reduce dependency on AI-generated content

To reduce the increased workload for plagiarism monitoring and detection, it is recommended that educators in Jordan modify assessment methods to minimize reliance on AI-generated content and enhance students' critical thinking skills. Participant 10 mentioned, “I now replace previously designed assignment questions (pre the ChatGPT era) with ones that are more able to assess students' critical thinking and problem-solving skills that go beyond the capabilities of ChatGPT.” Participant 19, a 39-year-old male assistant professor of Data Science and AI, suggested, “Use assignments that revolve around interactions and practical activities that cannot be created using ChatGPT.” Similarly, Participant 23, a 37-year-old female assistant professor of finance, advocated for more in-class activities: “Rely on face-to-face teaching and in-class activities to better assess students’ learning outcomes instead of 'take-home' assignments.” Literature supports these strategies, suggesting modifications in assessment requirements to foster critical thinking and independent work (Hasanein & Sobaih, 2023; Singh et al., 2023; Sok & Heng, 2023; Susnjak, 2022). For instance, Sok and Heng (2023) highlight the importance of tasks such as debates, group discussions, and presentations. Assignments involving literature review, gap identification, and research proposal development demand active student engagement, which can significantly reduce reliance on AI tools (Singh et al., 2023). Excessive reliance on ChatGPT-generated content may adversely affect students' cognitive and creative-thinking abilities (Aydın & Karaarslan, 2022; İpek et al., 2023; Rudolph et al., 2023; Singh et al., 2023; Susnjak, 2022). Given the importance of critical thinking for cognitive development and problem-solving, educators and students should focus on generating new knowledge through analyzing and synthesizing ideas from diverse assignments (Gözüm et al., 2019; İpek et al., 2023; Rudolph et al., 2023). Critical thinking is crucial not only for academic success but also for future career development (Hasanein & Sobaih, 2023; Singh et al., 2023).

To reduce reliance on AI-generated content, educators in Jordan should design assessment methods that challenge students' critical thinking and problem-solving abilities. This can include incorporating more in-class activities, such as debates, presentations, and group discussions, that require active participation and cannot be easily replicated by AI tools. Additionally, assignments that involve practical tasks, such as case studies or projects based on real-world scenarios, can help students apply theoretical knowledge in a meaningful way. Universities in Jordan should encourage faculty members to continuously update and diversify their assessment approaches to align with these goals.

4.2.3 Strategy 3: Emphasizing the irreplaceable human elements of teaching

To address concerns about job displacement, educators in Jordan should emphasize the irreplaceable human elements of teaching. Participant 22, a 44-year-old female assistant professor of biology and biotechnology, remarked, “ChatGPT and AI tools cannot help students in the emotional side!” Participant 8, a 55-year-old female associate professor of human resources, added, “ChatGPT cannot ‘feel’ or ‘assess’ the exact needs of students… it cannot encourage critical thinking like an instructor does.” Studies support this perspective, emphasizing AI's limitations in areas requiring creativity, empathy, and complex reasoning (Dempere et al., 2023; Felix, 2020; Murtarelli et al., 2021). Adeshola and Adepoju (2023) argue that AI tools lack emotional engagement, which is crucial for maintaining student motivation and engagement. Similarly, Dempere et al. (2023) and Murtarelli et al. (2021) highlight that AI's limitations include creativity (e.g., developing new courses or teaching methods), interpersonal interaction (e.g., counseling, personalized feedback), complex reasoning, and empathy—key aspects of effective teaching. Therefore, AI should complement rather than replace human educators, enhancing rather than substituting the essential human elements of teaching (Zhai, 2022). Educational institutions should raise awareness among educators about these dynamics to address concerns about job security, especially in light of feedback from university educators in Jordan.

To address concerns about job displacement, educators in Jordan should focus on the unique human aspects of teaching, such as providing emotional support, personalized feedback, and fostering a collaborative learning environment. Universities in Jordan can organize workshops and seminars that emphasize the importance of these elements, helping educators to enhance their interpersonal skills and adapt their teaching methods. Encouraging mentorship programs where experienced faculty members guide newer instructors can also help maintain the human touch in education, ensuring that AI tools serve as complementary resources rather than replacements.

Further analysis of the interview data reveals differences in perspectives based on rank, faculty, and experience. Concerns about assignment authenticity were mentioned by both junior and senior faculty, indicating a widespread issue. However, the emphasis on modifying assessments was notably highlighted by those in fields reliant on practical and interactive learning methods, such as Data Science and AI (Participant 19) and Finance (Participant 23). Concerns about job displacement and the need to emphasize emotional aspects of teaching were more strongly expressed by senior faculty members, who might feel a greater sense of job security threat due to their longer tenure and experience in traditional teaching methods (Participants 22 and 8). These insights reflect varying perspectives among educators and align with broader trends observed in educational research. For example, Hasanein and Sobaih (2023) and Singh et al. (2023) have also identified the need for adaptive assessment methods to counteract the over-reliance on AI tools. The literature also underscores the importance of human interaction and emotional engagement in teaching, which AI tools cannot replicate (Dempere et al., 2023; Felix, 2020; Murtarelli et al., 2021).

In conclusion, the recommendations provided by university educators in Jordan, along with insights from existing literature, offer comprehensive strategies to address the challenges posed by ChatGPT in academic settings. These strategies aim to enhance the teaching and learning dynamics within Jordanian universities, ensuring that the integration of AI tools like ChatGPT complements the educational process while preserving the crucial human elements of teaching.

4.3 Theoretical implications of the findings

This study contributes to the existing literature by addressing a critical gap in understanding the challenges faced by educators when students use generative AI tools like ChatGPT in academic assignments. While previous research has primarily focused on the perceived risks and benefits from the perspective of students, this study shifts the focus to educators, particularly in the context of Jordan. By employing a qualitative approach, the research offers a nuanced understanding of how these tools impact educators' experiences, assessment practices, and job security in Jordanian educational institutions.

The findings of this study have significant theoretical implications, particularly in the context of academic integrity and the integration of AI in education. This research contributes to existing knowledge by highlighting a novel challenge within the realm of academic integrity—the use of AI tools, specifically ChatGPT, by students in completing assignments. The study builds on and extends theories of academic integrity by introducing the concept of "AI-integrated integrity challenges."

This concept underscores the need to revisit and potentially revise existing theoretical frameworks surrounding academic integrity. Traditional theories have largely been predicated on the assumption of human-generated work, but the advent of AI tools like ChatGPT necessitates a re-evaluation of these frameworks to account for non-human contributions to academic work. This study suggests that current models of academic integrity may be insufficient to address the complexities introduced by AI technologies, particularly in how educators in Jordan and beyond assess and validate student learning outcomes.

Moreover, the study's findings suggest a theoretical shift in how educators and institutions should approach academic assessment. The traditional focus on preventing plagiarism and ensuring the authenticity of student work now requires an additional layer of consideration regarding AI-generated content. The study proposes that theories of educational assessment must evolve to integrate AI literacy as a core component, thereby equipping educators and students with the necessary skills to navigate the ethical and practical challenges posed by AI.

In summary, this study advances theoretical discourse by identifying gaps in existing academic integrity frameworks and suggesting a re-conceptualization that includes AI as a critical factor. This contribution is pivotal as it not only addresses a current issue within the Jordanian context but also lays the groundwork for future research on the implications of AI in educational settings.

5 Conclusions

While the educational landscape has seen remarkable progress, it has also encountered new challenges, particularly concerning the integration of emerging technologies like ChatGPT into academic settings. Much attention has been devoted to exploring the risks to university students, but there remains a notable gap in understanding the challenges faced by educators themselves, especially within Jordanian universities. This study uniquely contributes to the existing body of research by focusing on the perceptions of academics in Jordan, a context previously overlooked. By moving beyond a closed-ended quantitative approach, this research delves deeply into the insights, challenges, and personal experiences of 27 academics from both public and private universities in Jordan through semi-structured interviews.

The key findings reveal three main challenges: assessing assignment authenticity, managing an increased workload for plagiarism detection, and grappling with the potential risk of job displacement. These findings underscore the unique difficulties faced by educators, an area that has received less attention compared to the impacts on students.

Firstly, the difficulty in assessing assignment authenticity highlights the impact of AI's ability to produce high-quality, seemingly original content, which complicates the differentiation between genuine student work and AI-generated outputs. This challenge is exacerbated by the limitations of current plagiarism detection tools, which struggle to keep pace with the sophistication of AI-generated content. Educators across various disciplines have expressed concerns about maintaining academic integrity and ensuring fair evaluation, stressing the urgent need for enhanced detection mechanisms and updated academic integrity policies.

Secondly, the increased workload associated with monitoring and verifying student assignments has been a recurring theme among educators. The additional effort required to identify AI-generated content and correct inaccuracies has placed a significant burden on academic staff, revealing a broader issue of resource allocation and support. This finding aligns with existing research on the administrative challenges posed by AI tools, emphasizing the need for institutional support and improved detection technologies.

Finally, the fear of job displacement reflects a profound concern about the future role of educators in an AI-enhanced educational landscape. While AI tools offer potential efficiencies and capabilities, there remains an irreplaceable need for human interaction, empathy, and complex reasoning in teaching. The study highlights that while AI can complement educational practices, it cannot fully replace the nuanced and emotionally intelligent aspects of human teaching.

The study's theoretical contributions fill a significant gap in the literature by focusing on the educator-specific risks associated with AI technologies. It enriches existing frameworks on technology integration in academic settings by highlighting the nuanced challenges faced by educators, thereby extending our understanding of academic practices in the age of AI.

Practically, the study offers actionable recommendations to address these challenges. Enhancing the availability and quality of plagiarism detection software and promoting ethical academic practices can help address issues related to assignment authenticity. To alleviate the increased workload from monitoring AI-generated content, it is advised that educators modify assessments to reduce AI dependency and foster critical thinking. Furthermore, heightening awareness among educators about the irreplaceable human elements of teaching—such as emotional engagement and personalized instruction—can help mitigate fears of job displacement and emphasize the unique value of human interaction in education. By implementing these strategies, Jordanian universities can create a supportive environment that not only addresses the challenges posed by AI tools like ChatGPT but also leverages their potential to enhance teaching and learning dynamics. This study’s findings provide a foundation for future research to explore diverse educational contexts and methodologies, ensuring informed decision-making as AI technologies continue to evolve.

Despite the valuable insights provided by this study, there are several important limitations that need to be acknowledged. Firstly, the research is geographically confined to Jordan, which may limit the applicability of the findings to other regions with different educational frameworks and cultural contexts. As a result, the specific challenges faced by educators in Jordan might not fully reflect those encountered in other countries, potentially affecting the generalizability of the results. Secondly, the sample size of 27 academics, although varied in terms of professional background, may not capture the full range of perspectives within the broader academic community in Jordan. This limitation suggests that some viewpoints might be underrepresented, which could impact the overall robustness and comprehensiveness of the findings. The exclusive reliance on semi-structured interviews as a data collection method introduces additional concerns, such as potential biases and subjectivity from both interviewees and researchers. To address these biases and obtain a more balanced understanding, future studies should consider incorporating additional methods, such as surveys, focus groups, or case studies.

The limitations of this study may have influenced the findings by narrowing the focus to the Jordanian context and limiting the diversity of perspectives. To build upon these insights, future research should aim to expand the geographic scope beyond Jordan to include various cultural and educational contexts. This would enhance the generalizability of the findings and offer a more comprehensive view of the challenges related to AI integration in education. Additionally, increasing the sample size and diversity of participants would provide a more representative perspective of the issues at hand. Employing mixed-methods approaches, which combine qualitative and quantitative data collection, could help mitigate biases and provide a more nuanced understanding. Furthermore, exploring both the positive and negative aspects of AI tools like ChatGPT would contribute to a more balanced view of their impact on education. Future research should also focus on comparative studies between developing and developed countries to better understand how different educational systems and cultural contexts influence the integration and effects of AI technologies. Such studies could provide valuable insights for policymakers and educators worldwide. Given the rapid evolution of AI technologies, longitudinal studies are also essential to track how challenges and responses to these tools develop over time, ensuring that research remains relevant and responsive to technological advancements. By addressing these areas, future research can build on the current study’s findings and contribute to a more comprehensive theoretical framework for understanding the role of AI in education.