
1 Introduction

Upon its public debut, ChatGPT took the world by storm, becoming a trending topic across social media platforms, prestigious academic journals [1, 2], and major news outlets [3]. This ground-breaking technology, an instance of Generative AI, boasts remarkable capabilities in executing highly complex tasks such as crafting academic articles [4], stories, poems, and essays [5], summarising or expanding text, rewriting content from alternative perspectives, passing professional qualification exams, and even writing and debugging programming code [6, 7].

The significant potential of ChatGPT-like AI systems in the realm of higher education has sparked a heated debate among educators and education researchers. While some view the introduction of Generative AI technologies as the future of learning and teaching, others express scepticism and concern that it may undermine the educational system, leaving teachers and students less motivated and with diminished abilities. With Generative AI garnering attention and becoming a popular tool among students, it is crucial to understand its impact on higher education and address potential risks. The pressing question arises: Are Generative AI technologies a boon or a bane for higher education? To explore this question, this paper reviews both the advantages and potential drawbacks of utilising Generative AI in education, as well as the implications for higher education practices. Strategies that can help higher education institutions (HEIs) embrace the opportunities and mitigate the risks are also provided.

2 Method

Due to the newness of the topic and the limited number of academic publications available, conducting a comprehensive and systematic literature review was not feasible for this enquiry. The information available from existing publications was insufficient for keeping up with the rapidly evolving landscape of Generative AI. As a result, a desk research approach [8] was adopted with an expanded scope of literature, while carefully considering the quality of information sources. This approach facilitated the synthesis of recently published articles and their key findings while also covering the most updated information and emerging developments on the topic. Therefore, this paper offers valuable insights to researchers, practitioners, and policy-makers for addressing the impact of Generative AI in higher education.

The literature search was conducted on 15 April 2023. The databases searched in this review were those identified as reputable sources indexing research relevant to AI and education. More particularly, as Generative AI in higher education sits at the interplay between the science and social science domains, relevant literature was identified by searching the ERIC, CiteSeerX, ScienceDirect, Web of Science, ProQuest, JSTOR, Scopus, SpringerLink and Google Scholar electronic databases. Platforms for self-archiving of preprints of manuscripts, such as ResearchGate, arXiv and SSRN, were also searched. As ChatGPT and the notion of Generative AI are relatively new, the selection of papers was not restricted to peer-reviewed journal papers but included reports from reputable sources as well as high-quality media articles in the English language published up to the time when this review was being conducted.

A title search with the terms “ChatGPT” and “Generative AI” was performed. The titles and abstracts of the search results were further assessed for relevance and value. To be eligible for this desk review, articles had to discuss ChatGPT/Generative AI in an educational context, without constraints on specific settings. Once the distinct articles meeting the eligibility criteria had been identified, content analysis [9] was carried out to generate themes for further analysis.

3 Findings

3.1 Opportunities

Generative AI as a Teaching Aid.

AI-generated content can be invaluable for HEI teachers when preparing course materials and stimulating classroom discussions. By using ChatGPT, educators can generate discussion questions, case studies, or problem sets tailored to their specific teaching objectives, creating a more dynamic learning environment that caters to diverse student interests and abilities. ChatGPT can also serve as a valuable partner in curriculum design: teachers can consult the AI for ideas on designing or updating curricula, creating assessment rubrics, or setting specific goals, such as increasing accessibility for diverse learners. For example, Megahed, Chen, Ferris, Knoth, & Jones-Farmer [10] asked ChatGPT to generate a course syllabus for an undergraduate statistics course and noted that the result could be adopted without major changes. Such collaboration can significantly improve the efficiency of HEI teaching staff in preparing their courses.

ChatGPT can also help HEI teaching staff generate exercises, quizzes, and scenarios for student assessment [11]. Low-stakes tests are considered academically beneficial as assessment for learning [12], yet crafting well-structured questions, supplying scores and feedback, and ensuring that questions align with students’ anticipated knowledge demand substantial time and effort. Generative AI can help by producing practice questions and offering focused feedback. While Al-Worafi, Hermansyah, Goh, & Ming [13] cautioned that the assessment tasks suggested by ChatGPT might not cover all targeted learning objectives, such tasks have, when used with caution, been found effective in guiding HEI teaching staff in preparing assessments [14].

Generative AI has also been employed to facilitate collaboration and peer learning in educational settings. Wang, Li, Feng, Jiang, & Liu [15] investigated the use of AI-generated content to encourage collaborative problem-solving and found that students exhibited improved critical thinking and teamwork skills. Generative AI models have likewise been found to improve student-teacher interactions: in one study [16], AI-powered chatbots were used to assist teachers in providing real-time feedback and support to students, and the results indicated that AI-enhanced interaction led to a more efficient and engaging learning experience. Generative AI can further enhance active learning by transforming the teaching and learning paradigm. For instance, Rudolph, Tan, & Tan [17] proposed a flipped learning approach in which students prepare for lessons by studying pre-class materials with ChatGPT, freeing more class time for learning activities such as group discussions and problem-solving. The adaptability of generative AI models has also proven beneficial for teaching students with special needs: in one study [18], AI-generated content was used to create accessible learning materials for students with visual impairments, resulting in enhanced engagement and learning outcomes.
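
To illustrate the kind of assessment-preparation workflow described above, the sketch below shows how an instructor might programmatically request draft practice questions aligned to stated learning objectives. It is a minimal example only, assuming access to the OpenAI Python client (openai >= 1.0) and an API key in the environment; the model name, prompt wording, and output format are illustrative assumptions rather than a procedure prescribed by the studies cited above.

```python
# Minimal sketch: drafting low-stakes practice questions with a chat model.
# Assumes the OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY
# environment variable; the model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

learning_objectives = [
    "Interpret a 95% confidence interval for a population mean",
    "Choose between a t-test and a chi-squared test for a given scenario",
]

prompt = (
    "You are assisting a university statistics lecturer.\n"
    "For each learning objective below, write two multiple-choice practice "
    "questions with four options, mark the correct answer, and give one "
    "sentence of feedback for each distractor.\n\n"
    + "\n".join(f"- {objective}" for objective in learning_objectives)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,
)

draft_questions = response.choices[0].message.content
print(draft_questions)  # the lecturer reviews and edits before any classroom use
```

Consistent with the cautions above [13, 14], output of this kind is a draft for the teacher to vet against the intended learning objectives, not a finished assessment.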

Generative AI as Learning Companions for Students.

One of the most promising applications of generative AI is personalised tutoring for students. For example, ChatGPT can provide students with immediate feedback on their work, identify areas for improvement, and suggest targeted resources or exercises. By working in tandem with human instructors, generative AI can make learning more effective. One study [19] investigated the application of AI-based content generation in e-learning and found that AI-generated materials led to higher engagement and satisfaction among learners. Similarly, another study [20] explored the use of GPT for personalised tutoring and observed improved learning outcomes compared with traditional teaching methods. Generative AI can help students navigate complex concepts by providing support and additional resources. Acting as a “guide on the side,” the technology can offer explanations, examples, and analogies to facilitate a deeper understanding of the subject matter. As a virtual tutor, ChatGPT can assist students in independent study by answering their questions [21], which is especially beneficial for students who struggle with certain topics or need extra reinforcement. Several studies [22, 23] suggest that students can benefit from using ChatGPT as a scaffolding tool for an initial draft, and then refining that draft by correcting errors and adding references before submitting the final versions of their written assignments. Gilson et al. [24] noted that ChatGPT’s initial answer could prompt further questioning and encourage students to apply their knowledge and reasoning skills.
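
As a concrete illustration of the “guide on the side” pattern, the following sketch wraps a chat model in a simple tutoring loop whose system prompt asks for hints and guiding questions rather than final answers. It assumes the OpenAI Python client (openai >= 1.0); the model name and system prompt are hypothetical choices for illustration, not a design taken from the studies cited above.

```python
# Minimal sketch of a "guide on the side" tutoring loop.
# Assumes the OpenAI Python client (openai >= 1.0) and OPENAI_API_KEY;
# the model name and system prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a patient study companion for undergraduate students. "
    "Do not give final answers outright. Instead, offer hints, analogies, "
    "and guiding questions, and ask the student to attempt the next step."
)

def tutor_session() -> None:
    """Run a console-based tutoring dialogue until the student types 'quit'."""
    history = [{"role": "system", "content": SYSTEM_PROMPT}]
    while True:
        question = input("Student: ").strip()
        if question.lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": question})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=history,     # full history keeps the dialogue coherent
        )
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        print(f"Tutor: {answer}\n")

if __name__ == "__main__":
    tutor_session()
```

The design choice here, keeping the entire dialogue history in each request, reflects the scaffolding idea discussed above: the model can build on the student’s earlier attempts rather than answering each question in isolation.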

3.2 Challenges

1) Insufficient Content Accuracy and Lack of Contextual Understanding.

Generative AI models may occasionally produce misleading or incorrect information, which raises concerns about the quality and reliability of AI-generated content. ChatGPT exhibits limited comprehension of the meaning behind the words it processes [25]. While it recognises patterns and produces seemingly plausible responses, the system cannot fully grasp the underlying concepts [26]. This shortcoming may result in responses that lack depth and insight [27] or veer off-topic [28], particularly when addressing tasks that demand a nuanced understanding of specialised domain knowledge [29]. ChatGPT’s ability to assess the credibility of its training data is also limited, as it lacks the human capacity for critical evaluation [30]. This constraint affects its ability to gauge the accuracy of the information it generates [31]. As a result, ChatGPT sometimes writes plausible-sounding but erroneous or illogical responses [10]. Furthermore, ChatGPT currently possesses limited knowledge of world events beyond 2021 [2]. As knowledge continues to expand, this limitation might occasionally result in the delivery of outdated or inaccurate responses [32]. For instance, when prompted to provide up-to-date references, ChatGPT may fabricate seemingly plausible citations that do not correspond to genuine sources [22].

These limitations in understanding context and discerning the true meaning behind words may hinder the effective use of such models in educational settings. For instance, when employing ChatGPT for personalised learning, the AI system might lack an in-depth grasp of national and school-based curricula, individual students’ learning styles, and the cultural context in which students live. This could lead to generated content that is ill-suited to the learners, at times excessively challenging or overly simplistic. Another concern relates to the use of ChatGPT for essay grading: the AI system may not possess the necessary context and background knowledge to accurately evaluate and grade a student’s work. These examples emphasise the importance of considering the limitations of Generative AI systems when integrating them into educational environments.

2) Data Privacy, Transparency and Security Concerns.

Generative AI models often rely on vast datasets that may include sensitive student information such as demographics, academic performance, and behavioural patterns. Some academics have expressed concerns about the increasing integration of AI systems into educational settings, such as the excessive surveillance and monitoring of students [33]. Ensuring the privacy and security of data is crucial, as unauthorised access, data breaches, or misuse can result in severe consequences, including identity theft and unjust labelling. The decision-making processes of generative AI models can also be intricate and challenging to comprehend, limiting the transparency of AI systems [34]. Establishing transparency and accountability is vital for fostering trust in AI systems among educators and students. This includes providing lucid explanations of how the AI model functions, the data it employs, and the rationale behind its outputs. In April 2023, Italy became the first Western country to impose a temporary ban on ChatGPT due to privacy concerns. The country’s data protection authority highlighted the lack of a legal basis for collecting and storing the personal data used to train ChatGPT. Ethical concerns were also raised about the tool’s inability to determine a user’s age, potentially exposing minors to age-inappropriate content. This case draws attention to wider concerns about the types of data collected, the organisations responsible for acquiring it, and the manner in which it is employed in artificial intelligence systems. OpenAI, a private company, provides both free and subscription-based access to ChatGPT; questions therefore arise regarding profit-driven motives and the potential commercial use of data in the future. Italy’s decision accentuates the necessity for well-defined legal frameworks to oversee the handling of personal data within AI systems while addressing the privacy and ethical implications that emerge.

3) Algorithmic Bias and Discrimination Risks.

AI algorithms may unintentionally reproduce biases [35, 36]. For example, Lucy & Bamman [5] reported gender and representation bias in GPT-generated stories. Several factors contribute to this issue, including pre-existing biases in training data, algorithmic design, and societal context. Generative AI models derive their knowledge from the data on which they are trained; in accordance with the ‘garbage-in-garbage-out’ principle, if the training data contain biases or inaccuracies, the AI system may inadvertently perpetuate or amplify them [37, 38]. Consequently, as AI generates content, this can result in disparate treatment of specific student groups and potential social prejudice or discrimination based on factors such as race, gender, socioeconomic background, or learning abilities.

4) Accessibility and Digital Divide Issues.

The incorporation of generative AI in education may exacerbate extant disparities between students with access to advanced technologies and those without. There are two primary concerns regarding generative AI’s accessibility. The first concern pertains to the limited availability of the tool in certain countries due to government regulations, censorship, or other internet restrictions. These constraints may hinder the adoption and utilisation of ChatGPT in regions where its potential benefits could be significant. The second concern relates to broader issues of access and equity, specifically the unequal distribution of internet availability, cost, and speed. According to data from ITU [39], roughly 66 percent of the world’s population have access to the Internet, and there is a substantial digital divide between developed and developing countries. This disparity in connectivity presents challenges for the equitable distribution of AI tools like ChatGPT, as individuals in regions with limited internet access may not be able to take advantage of the technology. Moreover, those with limited access to the internet and AI technology may face additional disadvantages due to insufficient exposure to AI tools and a lack of skills in using them effectively—posing a “second level” digital divide.

5) Challenges to Current HEI Practices.

The rapid advancement of Generative AI has raised concerns about the potential displacement of human educators and the subsequent undermining of learning outcomes [40]. The influence of ChatGPT on teachers could manifest in various ways. One major concern is the potential alteration of student-teacher interactions: students might rely entirely on AI-generated content for learning rather than seeking genuine guidance from teachers. This shift could diminish the human element, which many argue is critical, in the educational process.

Another critical issue stemming from the growing prevalence of ChatGPT in higher education is its potential impact on academic integrity. Educational institutions and educators have raised concerns about the increased likelihood of plagiarism and cheating [41]. These systems can generate essays or complete exams based on specific parameters or prompts, and students might misuse them to submit assessments that are not the product of their own efforts [17, 42]. Such actions not only undermine the core objectives of higher education but also threaten to devalue academic degrees, a concern particularly relevant in disciplines that rely predominantly on essay-based assessments [43]. An associated issue is the risk that some students gain an undue advantage over others: by using AI systems to generate high-quality written assignments, they may secure a competitive edge over peers who do not use such systems, compromising the fairness and integrity of academic evaluations.

Distinguishing between a student’s original writing and responses generated by AI systems can be challenging [44]. Academic staff may struggle to accurately assess a student’s comprehension of the material when the student relies on a chatbot application to provide answers, because work produced with AI-generated content may not genuinely represent the student’s actual level of understanding, potentially compromising the assessment process. Existing plagiarism-detection tools, which are mainly designed to recognise similarities with pre-existing sources, may not effectively identify cases of academic misconduct involving ChatGPT-generated text [45, 46], since AI-generated content does not directly mimic existing sources. Although Turnitin has announced an AI-detection model that it claims can identify 97% of GPT-authored writing, its actual effectiveness still requires validation in practice. For example, a recent research paper suggests that GPT detectors tend to misclassify non-native English writing as AI-generated [47].

In response to these academic integrity concerns, some institutions worldwide have banned the use of ChatGPT, while others have adapted their assessment methods to place greater emphasis on in-class participation, presentations, or other non-written tasks. Through these measures, they aim to mitigate the possible misuse of AI-generated content and uphold a high standard of academic integrity.

4 Discussions

McMurtrie [48] contends that instruments such as ChatGPT will inevitably become integrated into daily writing practices, much as calculators and computers have become essential in mathematics and science. Indeed, when handheld calculators first emerged, there was significant apprehension about a potential decline in people’s numeracy skills; today, however, they are indispensable for teaching mathematics and can be found on every smartphone. Students and academics already routinely use spell and grammar checkers, thesauruses, and Wikipedia. As tools like ChatGPT become increasingly pervasive, with recent integration into Microsoft Office and online search engines, we must embrace their presence and utility. Accordingly, Sharples [49] proposes involving students and educators in the development and use of AI tools to enhance learning, instead of prohibiting students from using them. HEIs need to build the capacity for greater incorporation of Generative AI. The following four key strategies should therefore be considered.

1) Establishing Clear Policies for Generative AI at HEIs.

HEIs need to update their academic integrity policies and/or honour codes to address the use of AI tools. Establishing clear guidelines for Generative AI use at HEIs is crucial to promoting responsible and ethical applications of such technology in academic settings. HEI teaching staff should explicitly state in the course syllabus or assessment guidelines how and in what ways Generative AI tools may be used, provided students adhere to specific instructions and guidelines. For example, students must critically evaluate AI-generated content for relevance and accuracy, properly cite sources, and acknowledge any AI assistance in their work. Moreover, submission requirements may include a log of students’ AI queries and a reflective write-up. Assessments involving AI-assisted tasks should incorporate presentations to verify comprehension and understanding. It is essential to maintain a balance between human-generated content and AI assistance while adhering to the university’s academic integrity policies. HEI teaching staff play a pivotal role as gatekeepers, taking appropriate action as needed. By implementing these specific policies and guidelines, HEIs can encourage the responsible use of AI tools while maintaining a high standard of academic integrity and allowing students to benefit from the efficiency and convenience offered by Generative AI systems.

2) Revisiting Assessment in Higher Education.

The issues surrounding ChatGPT provide a golden opportunity to revisit assessment in higher education. Future assessments could focus on higher levels of Bloom’s taxonomy, such as application, analysis, and creation, as suggested by Stutz et al. [50]. Meanwhile, HEI teaching staff can implement various strategies to address the criticism surrounding the use of ChatGPT and other AI language models in higher education. By focusing on the positive aspects of these tools, teaching staff can create an environment that fosters skill development, collaboration, and academic integrity. One approach is to emphasise the role of ChatGPT and similar AI tools as supplementary resources that enhance students’ learning experiences. HEI teaching staff can design assessments that require students to demonstrate critical thinking, problem-solving, and communication skills while also utilising AI assistance for research and idea generation. This balanced approach ensures that students apply their knowledge and skills while benefiting from technological advancements. Teaching staff can also promote proper citation and referencing practices to ensure academic integrity when using ChatGPT or other AI language models; by teaching students how to accurately acknowledge the sources used in their research, including AI-generated content, academic honesty is maintained and the validity and reliability of research are supported. Another strategy is to integrate AI language models into the curriculum to encourage collaborative learning. By incorporating ChatGPT as a tool for brainstorming, idea generation, or providing feedback on drafts, students can engage in group discussions or presentations and learn from each other. This collaborative approach enables students to harness the benefits of AI assistance while also fostering critical thinking and independent learning. Moreover, HEI teaching staff can set open-ended assessments that foster creativity and originality, discouraging overreliance on AI language models like ChatGPT. By challenging students to think critically and independently, teaching staff can ensure the development of essential skills required for academic and professional success.

3) Teacher Professional Development.

As Generative AI becomes more prevalent in education, there is a growing need for teacher professional development (TPD) that equips HEI teaching staff with the knowledge and skills required to effectively integrate AI into their teaching practices. Educators need support in identifying and implementing pedagogical strategies that effectively integrate AI tools into their teaching practices. TPD programs should facilitate discussions and workshops on innovative ways to use generative AI to enhance curriculum delivery, student engagement, and assessment. Given the ethical challenges associated with generative AI, TPD should address issues such as data privacy, algorithmic bias, and fairness. Educators should be equipped with the knowledge to foster digital citizenship and ethical AI use among their students. TPD programs should emphasise the importance of ongoing evaluation of generative AI tools, including monitoring their impact on student learning outcomes and adjusting teaching practices accordingly. Educators should be encouraged to engage in reflective practices and share their experiences with their peers to promote continuous improvement.

4) Developing Student Literacy for Responsible Use of Generative AI.

It is essential to educate students about academic integrity policies and the consequences of academic misconduct. By promoting ethical standards, policymakers can help maintain the credibility of educational achievements and preserve the value of university education. It is also crucial to introduce students to the limitations of ChatGPT, such as its reliance on biased data, limited up-to-date knowledge, and potential for generating incorrect or fake information. HEI teaching staff should stress the importance of using high-quality sources while exercising caution with substandard sources, misinformation, and disinformation [51]. Encouraging a well-informed and discerning approach to research ensures the reliability of the information being incorporated into students’ work. HEI teaching staff could teach students to make good use of other authoritative sources to verify, evaluate, and corroborate the factual correctness of information provided by ChatGPT. This can be supplemented by promoting reading widely and voraciously to improve critical and creative thinking skills, as exposure to diverse perspectives and ideas fosters intellectual growth and stimulates innovation. HEIs should further encourage students to develop digital literacy and master AI tools, as suggested by Zhai [38], since such mastery can provide a competitive edge in the job market and enhance employability. Integrating AI tools into the writing process can foster creativity and enhance critical thinking, as long as students use these tools as a means to improve their learning in an appropriate manner, rather than merely copying and pasting text. Moreover, HEI leaders should encourage incorporating generative AI into curricula to guide student learning with these tools. Additionally, providing opportunities for students to practice using generative AI like ChatGPT to solve real-world problems effectively can demonstrate their practical utility and help develop a deeper understanding of the capabilities and limitations of such tools.

5 Conclusion

Generative AI, particularly ChatGPT, has shown tremendous potential to revolutionise higher education. To fully realise this potential in enhancing learning and teaching, it is crucial to ensure the ethical, responsible, and inclusive use of generative AI in higher education settings. This study has reviewed the opportunities and challenges of Generative AI in higher education and proposed four key strategies for effectively navigating its integration within HEIs. By adopting these approaches, HEIs can harness the transformative power of generative AI while fostering a responsible and informed academic environment that benefits students and educators alike.