Keywords

12.1 Ethics in AI

AI ethics aims to guide the creation and ethical application of AI technology. As AI becomes more embedded in everyday life, organizations are working to establish AI codes of ethics.

An AI code of ethics, also known as an AI value platform, is a policy statement that formally defines the role of artificial intelligence as it applies to human progress (Lawton and Wigmore 2021). The goal of creating a code of ethics for AI is to give people guidance to follow when they must make a moral choice about employing AI.

The science fiction writer Isaac Asimov anticipated the problems posed by autonomous AI agents long before they were developed, and he wrote “The Three Laws of Robotics” to mitigate those threats. The first law of Asimov’s code of ethics is that robots must not intentionally hurt humans or allow harm to come to them by failing to act. The second law requires robots to follow the instructions of their human masters unless doing so would violate the first law. The third law requires robots to protect their own existence, provided this does not conflict with the first two laws (Lawton and Wigmore 2021).

Groups of professionals have responded to the rapid development of AI over the past 5–10 years by creating safeguards against the risks that AI poses to humans. Max Tegmark, a cosmologist at MIT, Jaan Tallinn, co-founder of Skype, and Victoria Krakovna, a researcher at DeepMind, formed a nonprofit institute to study these issues. The Asilomar AI Principles are a set of 23 standards developed by the institute in collaboration with AI researchers, developers, and academics from various fields (Lawton and Wigmore 2021).

Kelly Combs, director of KPMG’s Digital Lighthouse, has stated that any AI code of ethics must “include clear guidelines on how the technology will be deployed and continuously monitored” (Lawton and Wigmore 2021). Such guidelines should call for safeguards against algorithmic bias in machine learning, ongoing monitoring of algorithmic and data drift, and the identification of the persons responsible for training algorithms and for their data sources.
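To make the drift-monitoring safeguard concrete, here is a minimal sketch of one common approach: a two-sample statistical test comparing a feature’s training distribution against recent production data. The feature values, sample sizes, and the 0.05 threshold are illustrative assumptions, not prescribed standards.

```python
# Minimal sketch of data-drift monitoring, one safeguard the guidelines call for.
# A two-sample Kolmogorov-Smirnov test compares a feature's training
# distribution against recent production data; the 0.05 threshold is an
# illustrative choice, not a prescribed standard.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train_feature: np.ndarray, live_feature: np.ndarray,
                 alpha: float = 0.05) -> bool:
    """Return True if the live distribution differs significantly from training."""
    statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha

# Example: values drawn from a shifted distribution trigger the alarm.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.4, scale=1.0, size=5_000)   # distribution has drifted
print(detect_drift(train, live))                     # True -> flag for review
```

In practice such a check would run on a schedule for each model input, with flagged features routed to the named person responsible for the algorithm and its data sources.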

12.2 Ethical Implications of Artificial Intelligence

Responsibility is a source of debate in discussions of deep learning and other forms of AI. In a conventional classroom setting, the trainer is responsible for verifying the veracity of the training materials before they are distributed to students; if they fail to do so, they are liable.

In contrast, with AI, the inventors of the algorithms do not also serve as the content providers (Hauptfleisch 2016). This creates difficulty when an accident occurs, because machines cannot be held responsible the way people can.

Mustafa Suleyman, CEO of Google DeepMind, discussed the duty of designers and technologists to think critically when constructing such systems at the Disrupt London tech conference (Hauptfleisch 2016). He warned that developers could unwittingly embed their prejudices into the technologies they create.

As a test, Microsoft launched a chatbot, Tay, on Twitter; the more people tweeted at it, the more it learned. Its tweets moved from “people are cool” to “Hitler was right” in less than 24 hours, demonstrating the influence our shortcomings may have on AI systems.

Data processing systems do not learn from their algorithms but from the data they process (Hauptfleisch 2016). Holding algorithm developers liable would therefore be unreasonable from a legal standpoint. This could nevertheless be a severe issue in settings where safety and compliance are paramount, such as schools.

It is therefore up to the humans in charge of AI systems to guarantee that the data those systems process is accurate and fair, to the same extent that educators must provide students with reliable information.

There will be deep-seated changes in both the quality and price of education. There is no denying the efficacy of deep learning in online education. On the other hand, unintended repercussions may be challenging to anticipate and manage.

Given the exponentially increasing benefits of advances in learning and information communication, we must take chances for the future, just as our ancestors did.

12.3 Ethical Issues of AI in Education

Let us take a deep dive into the ethical issues of AI in education.

The aims of education. Reiss and White (2013) argued that education’s overarching goal should be to foster not only human flourishing but, in a larger sense, the flourishing of the nonhuman environment. This expansion is crucial because it comes as the human race becomes increasingly aware of its devastating effects on the planet through deforestation, climate change, and species extinction. Using AI as a learning tool highlights the importance of reevaluating education’s core goals.

The goal of education might be more broadly conceived as contributing to the flourishing of each student (Reiss 2017). Establishing human flourishing as the aim of education is not the same as helping students gain powerful knowledge (F. D. Young 2008)—the kind of knowledge that students would not acquire without schools—although the two are not incompatible. The claim that education should promote human flourishing comprises two parts: preparing students to live flourishing lives themselves and preparing them to help others live flourishing lives.

Schools should give students more freedom to choose activities that interest them, to facilitate their maturation into self-reliant adults (Reiss 2021). In particular, one may argue that education’s primary purpose is to ready students for a life of self-directed, whole-hearted, and fruitful participation in meaningful relationships, activities, and experiences. Achieving this goal requires introducing students to the many paths they can choose, with the understanding that students will have varying degrees of competence in making such “choices.” Teachers, like parents, are likely to have a more substantial role in guiding young children. Both schools and parents have a responsibility to help children develop the skills they will need to make decisions on their own.

In his Nicomachean Ethics and Politics, Aristotle emphasizes that humans should, and can, enjoy flourishing lives; this is one of the earliest ethical ideas (Reiss 2021). There are various conceptions of what makes a life flourishing. For a Benthamite hedonist, the goal is maximizing positive emotions while minimizing negative ones. From a more mundane point of view, flourishing could be linked to financial success, public acclaim, material possessions, or the gratification of one’s most profound, fundamental desire. Each of these accounts has its flaws. One issue with equating flourishing with the satisfaction of one’s desires is that it can lead to unhealthy living, such as a person devoting their entire life to keeping their bedroom neat (Reiss 2021).

The German concept of Bildung expands our understanding of success in school: it describes the stages of development during which a person comes into their own as an individual while also becoming a contributing part of society. This idea is exemplified by the vast body of literature known as the Bildungsroman (Reiss 2021) (often translated as “coming-of-age” novels), in which a protagonist undergoes a moral and psychological transformation from childhood to adulthood; examples include Candide, The Red and the Black, Jane Eyre, Great Expectations, Sons and Lovers, A Portrait of the Artist as a Young Man, and The Magic Mountain.

This has implications for a future in which artificial intelligence plays a more significant role in education: while all teachers should reflect on their goals, such reflection is especially important when the teacher in question lacks self-awareness and the capacity for reflexivity and questioning, as is the case when AI provides the teaching. There is also a risk that AI education systems will prioritize a restricted conception of education, in which knowledge acquisition or a narrow set of skills becomes dominant, given the historical focus of computer-based learning on disciplines like mathematics. Creating effective AI packages for teaching literature, without assuming a Dead Poets Society perspective, may be more challenging than creating them for teaching physics. The overarching goal of our pedagogy is to help students develop into engaged, well-informed citizens. This involves nudging them toward civic engagement grounded in freedom, individual autonomy, equitable regard, and cooperation at the state, national, and international levels, out of a concern for the common good. Moreover, young people need to know what these attitudes require, such as an awareness of the complexities of democracy, the range of perspectives on it, and how it might be applied in their social context (Reiss 2017).

12.3.1 Possible Impacts of AI on the Working Conditions of Educators

Students are not the only ones whose lives will be altered by the rise of AI in the classroom. The ramifications for (human) educators are impossible to foresee. Every educator would welcome the possibility that AI might lead to more motivated students, allowing them to devote less time and energy to classroom management difficulties and more to enabling learning. However, the privacy problems and growing surveillance culture may also apply to educators. Once upon a time, a teacher’s classroom was their haven. As data on student performance and achievement grows, teachers may discover they are under as much scrutiny as their students (Reiss 2021). Teaching may become considerably more stressful than it already is, even if AI has little or no effect on the number of teachers needed.

A teaching assistant’s job appears more precarious than a teacher’s. In their landmark study evaluating a significant expansion of teaching assistants in classrooms in England—an expansion costing about £1 billion—Blatchford et al. (2012) reached the shocking, statistically well-supported conclusion that students who received the most help from teaching assistants performed much worse academically than peers who received less assistance. Subsequent research has shown that this result can be reversed if teaching assistants are given adequate resources and training (Webster et al. 2013). Nevertheless, the case for a large number of teaching assistants in a post-AI world seems weaker than the case for a large number of teachers.

12.3.2 Special Educational Needs

Students with special educational needs (SEN)—a catch-all category that includes attention deficit hyperactivity disorder (ADHD), autism spectrum disorder (ASD), dyslexia, dyscalculia, and specific language impairment (SLI), as well as less well-defined categories like moderate learning difficulties (MLD) and learning disabilities (LD)—should benefit significantly from the potential of AI to tailor the educational offer more precisely to a student’s needs and wishes (Astle et al. 2018). In a standard classroom of, say, 25 students, a smaller proportion of the material covered in any given lesson will be directly applicable to students with special needs than to students without them. The same is naturally true for students labeled as gifted and talented (G&T) and for those who find learning—in general or in a specific subject—much tougher than most, taking substantially longer to progress.

To be clear, however: while schools may need to make a binary decision about whether a student is SEN or G&T (for funding decisions and the allocation of specialist staff, for example), these are not dichotomous variables; they fall on continua. One of AI’s benefits is that it need not resort to the oversimplified categorizations that are sometimes necessary for traditional teaching methods.

The number of students with SEN is hard to pin down. Definitions have shifted, but in England the average is still around 15%. The percentage of students who are G&T is usually substantially smaller, with estimates ranging from 2% to 5%. Even with this rudimentary categorization, it is evident that roughly one-fifth to one-sixth of students fall into the SEN or G&T categories. And there are still many other students who, in the eyes of any parent, have special needs while not falling into formal categories (Reiss 2021).

It is not hard to picture a future when AI aids in this kind of education but does not entirely replace human instructors. Indeed, it appears likely that AI will be of particular use when it supplements human teachers by offering access to topics (even whole disciplines) that individual teachers cannot, so expanding the educational offer.

12.3.3 Student Tracking

In the West, we often find ourselves shaking our heads at how the combination of biometrics and AI in some nations leads to ever-stricter tracking of people. Betty Li, age 22, is a student at a school in northwest China. She must pass through scanners to enter her dorm, and facial recognition cameras above the blackboards monitor her and her classmates’ participation in class (Xie 2019). Cameras of this type are being used in some Chinese secondary schools to track and record the emotional states of students. Although such information is not being used now, this may change as the technology improves.

Sandra Leaton Gray (Gray 2019) writes that she has nightmares about how artificial intelligence and biometrics will merge in the classroom. She argues that, because of the widespread use of digital textbooks in schools, publishers already know how long students spend on each page and which pages they skip. She continues:

In the future, companies may even be able to observe students’ faces as they read the material or link their responses to online questions throughout the course to their final GCSE or A-Level scores, mainly if the same parent company developed the test. While this is not happening, it is theoretically feasible. (Gray 2019)

In 2019, Leaton Gray (Gray 2019) raised valid concerns about the integration of AI and biometrics. Technology studies frequently repeat the cliché that technologies are, at best, neutral and often downright harmful, depending on their application. Such technology could improve education, but it is easy to imagine the nightmarish effects of panopticon-style monitoring.

12.4 The Ethical Framework for AI in Education

In the summer of 2018, Sir Anthony Seldon, Priya Lakhani OBE, and Professor Rose Luckin founded the Institute for Ethical AI in Education. Their goal was to create an ethical framework to ensure that all students could reap the greatest possible benefits from using AI in the classroom while being shielded from its potential dangers.

After extensive stakeholder engagement, the Institute has produced The Ethical Framework for AI in Education (The Institute for Ethical AI in Education 2020). The Framework is based on an agreed-upon ideal of ethical AI in education, and it aims to ensure that all students can reap the greatest possible benefits from AI in the classroom while being safeguarded from its potential dangers. Those responsible for purchasing and implementing AI-related educational resources should find it a helpful framework.

Leaders and practitioners in educational settings are critical to maximizing the benefits of AI for students while mitigating the associated hazards. Decision-makers can use the Framework throughout the procurement phase to help guarantee that only ethically created resources are purchased and used, thereby incentivizing providers to develop AI ethically and with learners’ best interests in mind.

The Framework gives educators the tools to steer the development, acquisition, and deployment of AI for students’ benefit. However, ensuring that students get the most out of AI in the classroom is not their job alone. Those responsible for creating AI materials must guarantee that their products adhere to pedagogically sound standards and do not unfairly target any one demographic of students.

The Framework provides a reliable means of shielding students from potentially harmful AI resources (The Institute for Ethical AI in Education 2020). It integrates the ethical expectations placed on those designing and developing AI systems, eliminating the need for a separate framework. At several points, the Framework suggests that decision-makers request pertinent information during procurement to verify that AI resources have been created ethically. The Institute expects procurement decisions to be quickly affected if organizations involved in designing, developing, and providing AI resources cannot supply the information the Framework requires.

Designers are also tasked with upholding local data protection rules and standards, such as the Information Commissioner’s Office’s Age Appropriate Design Code (the “Children’s Code”). In addition, the Institute recommends that, by September 2021, all providers of AI goods and services used in schools comply with the standards in The Ethical Framework for AI in Education (The Institute for Ethical AI in Education 2020). These organizations are urged to consider the data that procurers will require and to take preventive measures to guarantee they have everything they need to show that their resources were developed with ethics in mind.

The Institute for Ethical AI in Education believes educational reforms are necessary to ensure that all students receive the greatest advantage from AI in the classroom (The Institute for Ethical AI in Education 2020). AI can potentially solve many systemic issues plaguing education today, from a thin curriculum to ingrained social immobility. With the help of AI, nations may be able to abandon antiquated evaluation methods, paving the way for universal access to low-cost, high-quality lifelong education.

While it is outside the purview of the Institute to propose a design for how these reforms could be supported using AI, it is evident that not all students will benefit from the reforms if the digital divide is not overcome promptly and decisively.

The effects of digital exclusion were made starkly apparent during the period when schools were closed because of COVID-19. The worst effects were felt by students who lacked proper access to technology and/or the Internet. The severe academic decline experienced by many of the most impoverished children and teenagers was preventable. If the Institute’s findings are heeded, it is less likely that the same mistakes will be made again.

In the long run, the pandemic could prove a game-changer in the history of education. Provided that all people have access to the required hardware, infrastructure, and connectivity, societies can hope that the ethical and purposeful use of AI will help them overcome massive educational disparities and unleash the full potential of all students.

The Institute for Ethical AI in Education strongly recommends that all governments implement policies to ensure all students have access to a device and Internet connection (The Institute for Ethical AI in Education 2020). Only then will students everywhere reap the full benefits of AI in education.

12.5 Investigating the Moral Implications of AI for K-12 Classrooms

Due to COVID-19, online learning has become more commonplace in K-12 classrooms in recent months. Everything from checking your email to using a search engine now uses artificial intelligence. It also exists in the classroom, for example, through personalized learning or assessment systems. However, what about the moral and ethical repercussions of this?

Two researchers from Michigan State University investigated the potential benefits and drawbacks of using artificial intelligence in elementary and secondary schools. Their findings are presented below.

Selin Akgun, lead author of the paper published in AI and Ethics and a doctoral candidate in the College of Education’s Curriculum, Instruction, and Teacher Education (CITE) program, elaborated on the benefits of AI in the classroom: “Artificial intelligence can help students get quick and helpful feedback and can decrease workload for teachers, among other affordances.” Some educators promote student-to-student dialogue through social media, while others supplement lesson delivery with online platforms in hybrid or multi-level settings. While numerous potential benefits exist, the researchers also wanted to address potential drawbacks.

To help educators make the most of AI in the classroom, Akgun and Christine Greenhow have outlined four key areas, as shown in Fig. 12.1, to explore (Akgun 2021).

  1. Privacy. Users of many AI systems are asked to agree to the system’s usage of and access to personal data in ways they might not fully comprehend. Think about the “Terms and Conditions” you are asked to accept before installing any new software. In many cases, users click “Accept” without fully understanding the implications of doing so; only by reading and comprehending the terms can they learn about the more nuanced ways the software can use their data, such as tracking their location. Further, critics note that parents and children are effectively “forced” to share data when platforms are mandated as part of the curriculum.

  2. Surveillance. AI algorithms can monitor a user’s actions to tailor their experience. Systems of this kind exist, for example, to analyze student performance and determine where improvement is needed in the classroom. However, monitoring and tracking students’ online chats and behaviors may limit student participation and make students feel unsafe taking ownership of their ideas.

  3. Autonomy. Reliance on AI algorithms for tasks such as estimating a student’s test score might make it hard for students and educators to feel they have control over their own work. Experts say this phenomenon “raises problems about fairness and self-freedom.”

  4. Bias and discrimination. According to the researchers, every algorithm is developed with data that reflects society’s historical and systemic biases, which ultimately morph into algorithmic biases. One way in which these can manifest in AI is the gendered translation of words and phrases (“She is a nurse,” but “he is a doctor”); a minimal probe of this effect is sketched after Fig. 12.1. Different AI-based platforms exhibit varying degrees of gender and racial bias, even when these prejudices are built into the underlying algorithms unintentionally.

Fig. 12.1
A diagram lists some key areas to consider when using AI. The areas are privacy, surveillance, autonomy, and bias and discrimination.

Key areas to explore when using AI in the classroom
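To make the translation example in point 4 concrete, here is a minimal sketch of a gender-bias probe. The `translate` function is a hypothetical stub standing in for a real translation model (real probes of this kind often use gender-neutral source languages such as Turkish, where the pronoun “o” carries no gender); the occupation list and the stub’s biased lexicon are illustrative assumptions, not measurements of any actual system.

```python
# Minimal sketch of a gender-bias probe for translation systems, in the spirit
# of the "she is a nurse / he is a doctor" example above. The translate()
# function is a hypothetical stub; in practice it would wrap a real model.
OCCUPATIONS = ["nurse", "doctor", "engineer", "teacher", "cleaner"]

def translate(text: str) -> str:
    """Stub standing in for a gender-neutral-to-English translator.
    A biased model picks a gendered pronoun even when the source has none."""
    biased_lexicon = {"nurse": "She", "doctor": "He", "engineer": "He",
                      "teacher": "She", "cleaner": "She"}
    occupation = text.split()[-1]
    return f"{biased_lexicon[occupation]} is a {occupation}"

# Probe: a gender-neutral source sentence should not force a gendered pronoun.
for occupation in OCCUPATIONS:
    source = f"o bir {occupation}"   # gender-neutral sentence pattern (stub)
    print(f"{source!r:25} -> {translate(source)!r}")
```

Counting how often each occupation receives “She” versus “He” across such probes gives a simple, auditable measure of the skew described above.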

12.6 Artificial Intelligence in Higher Education: Ethical Questions

Concerns about ethics typically center on the potential impact of AI systems on various demographics, as well as on educational ideals and how such values might be affected by the technology (Zeide 2019).

  • The Black Box. Understanding what happens inside AI systems is challenging because of the high degree of complexity at which they operate. The premise is that computers can perform tasks beyond the capabilities of human thought; for this reason, any attempt to explain the underlying mechanisms at play produces simplifications.

  • Invisible Infrastructure. By deciding what information to include in admissions, financial aid, and student information systems, these AI technologies set the ground rules for what matters in the higher education sector. The supporting structures thus become hidden from view, and the people responsible for building the infrastructure do not account for this explicitly. A prime instance is instructional software that mandates concrete goals for users’ progress. Setting goals is a central tenet of any sound educational or institutional plan, but many teachers fail to recognize that implementing new technologies amounts to mandating a new set of standards for student performance.

  • Authority Shifts. Most of the time, a private company is responsible for gathering and presenting the data. Therefore, the business is responsible for making many decisions that will have far-reaching and subtle effects on the fundamental qualities of various systems. These private corporations may have less of an incentive to be transparent with the people who matter most to schools, the kids. The shift in power and the resulting changes in motivation necessitate careful consideration before implementing these technologies.

  • Narrowly Defined Goals. The data-driven applications now available tend to support particular aims; these systems cannot function unless what constitutes an optimal outcome is explicitly defined. Consider the difference between getting a well-rounded education and focusing on a particular area: there will be less room for improvisation than in how people currently engage in classes and on college campuses. More abstract educational aims, such as developing citizens capable of self-governance or encouraging creativity, may be pushed aside in order to optimize measurable learning outcomes. Such abstract aims could, strictly speaking, be represented in data, albeit through extremely imprecise approximations, which means they might not be tracked or given any importance.

  • Data-Dependent Assessment. Data-driven evaluation raises similar issues. Data collection tools that focus primarily on online interactions may miss subtleties that human educators can pick up. Suppose a student gives the wrong response to a question: the machine will record one wrong answer, whereas a teacher who knows the student is sick with a cold may overlook the mistake.

  • Divergent Interests. The interests of universities and tech companies, and those of universities and their students, may not always align. This can lead to hasty product launches or an overemphasis on scaling at the expense of ensuring that the highest-quality platforms are used and that their effectiveness is measured in meaningful ways. Not all tech creators have this issue, but it is essential to remember that some do. Those working in information technology are motivated to create systems that use ever-increasing amounts of data to produce what they believe to be ever-more accurate outcomes; demonstrating the value of their systems in this way is a key objective.

    Differences in priorities between schools and students are more pronounced and less noticeable. Predictive analytics and early warning systems that identify and help at-risk students are commonly cited as a technique for increasing student retention rates. Flagging a struggling student is acceptable if the school acts to fix the problem or prevent it; however, doing so may not always serve the interests of the institution’s administration.

    A few years ago, the president of Mount St. Mary’s University in Maryland became widely known for using a predictive analytics survey to identify at-risk students. The plan was to push them out the door before the university had to report its enrollment data to the federal government, boosting its retention rates and academic reputation in the process. The president claimed the idea would benefit both the school and the students, who would save money on tuition. There is no denying that this calls into question the very nature of the educational and institutional endeavor. A minimal sketch of the kind of early-warning model at issue follows this list.
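For concreteness, here is a minimal sketch of the kind of early-warning model described above: a logistic regression that flags potentially at-risk students from simple engagement features. The feature names and the synthetic data are illustrative assumptions, not a real institutional dataset, and the 0.7 threshold is an arbitrary choice for the example.

```python
# Minimal sketch of an early-warning / predictive-analytics model: a logistic
# regression that flags potentially at-risk students. All features, labels,
# and thresholds are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1_000
# Illustrative features: attendance rate, LMS logins per week, first-term GPA.
X = np.column_stack([
    rng.uniform(0.5, 1.0, n),    # attendance rate
    rng.poisson(10, n),          # logins per week
    rng.uniform(0.0, 4.0, n),    # GPA
])
# Synthetic label: lower attendance and GPA raise dropout risk.
risk = 3.0 - 2.5 * X[:, 0] - 0.05 * X[:, 1] - 0.6 * X[:, 2]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-risk))).astype(int)

model = LogisticRegression().fit(X, y)
# The ethical question is what happens next: the same score can trigger
# extra tutoring and outreach, or -- as at Mount St. Mary's -- quiet removal.
flagged = model.predict_proba(X)[:, 1] > 0.7
print(f"{flagged.sum()} of {n} students flagged for early intervention")
```

The model itself is neutral between the two uses; the divergence of interests lies entirely in what the institution does with the flags.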

12.7 Elements to Consider and Questions to Ask

To achieve the best and most equitable AI tool implementation, several factors (as shown in Fig. 12.2) must be taken into account (Zeide 2019):

  • Procurement. Consider which technologies and companies are most relevant to your specific student body, as well as contractual responsibilities concerning the delivery of student data. When choosing a company to work with, be sure it has a good track record of responding to customer concerns.

  • Training. The personnel responsible for implementing and using these tools must be trained and shown their potential and limitations.

  • Oversight. Establish a system to evaluate the tools’ efficacy regularly, focusing on whether they benefit specific student subsets or produce inflated results. This is challenging but crucial, as these tools rapidly become obsolete.

  • Policies and Principles. Plan how your organization will adopt analytics-driven tools and develop guiding principles for how these will be implemented.

  • Participation. Collect feedback from students and teachers about the problems they are experiencing and the improvements they would like to see. The messiness and potential for controversy associated with this step deter many from taking it, despite its long-run benefits.

Fig. 12.2
A diagram lists the factors to be considered to get the best out of AI. The factors are procurement, training, oversight, policies and principles, and participation.

Factors to be considered to achieve the best and most equitable AI tool implementation

The Educational Technology Leadership Committee at the University of California created some of the most robust policies and ideas in this area in 2015 (Phillips and Williamson 2019). The committee listed and explained six guiding principles: ownership, ethical use, transparency, freedom of expression, protection, and access/control. It also states that service providers should attend to data privacy practices in the following areas: data ownership, usage rights, opt-in, interoperable data, data without fees, transparency, service provider security, and campus security (University of California: Learning Data Privacy Principles and Recommended Practices 2016).

Finally, six crucial questions should be asked for a successful AI application within higher education (Zeide 2019):

  • For what purposes does the information exist? If you are implementing the systems correctly, you cannot simply look at a red, green, or yellow light to determine whether or not students are succeeding.

  • Which choices are we missing out on? This involves not only categorization and visualization but also computational choices.

  • When it comes to the content, who holds the reins? Is the fault with your system or with the company that made it? Are your teachers comfortable with that? How at ease are you with it?

  • How do we verify results regarding their efficacy, distribution, and positive and negative effects?

  • What gets lost with datafication?

  • Whose or what needs do we put first?

While these questions will not provide any magic solutions, they will provide a framework for thinking about the less evident characteristics of these systems.

12.8 Recommendations to Enhance AI Implementation in Education

Here are some recommendations (Jackson et al. 2021) based on research into the roles and potential contributions of various parties to policymaking. These recommendations urge the various actors to increase the use of AI in schools in ways that boost equity and learning outcomes. The integration of AI with the various stakeholders is shown in Fig. 12.3.

Fig. 12.3
A diagram lists the various stakeholders in an AI integration. The stakeholders are teachers, schools and districts, AI developers, companies, researchers, and legislators.

AI integration with various stakeholders

12.8.1 Legislators

Scalable policy could be encouraged by establishing municipal, district, or state procedures to safeguard consumers from unfair trade practices. Regulatory bodies could mandate these procedures for both the companies that sell AI technology and the consumers who use it.

12.8.1.1 Recommendations

Legislators are encouraged to:

  • Enact legislation to reasonably safeguard consumers from exploitative business practices.

  • Pass legislation establishing a regulatory agency to ensure the safety of both companies and consumers.

12.8.2 AI Developers, Companies, and Researchers

Companies working in AI would do well to employ a diverse staff of developers and to seek feedback from diverse auditors. As the AI product development process progresses, end-users such as school administrators, classroom teachers, students, and parents should be educated on the potential benefits of AI in educational settings.

The Edtech Equity Project uses a certification mechanism to push for structural reforms within AI systems; several AI developers and projects follow this methodology, such as the Digital Promise Product Certification developed in collaboration with the Edtech Equity Project. With regard to end-users’ capacity to understand a system’s output, AI systems that employ models like decision trees, support vector machines, and others are generally accepted as interpretable.

Using such systems, and disclosing how interpretable they are, would help stakeholders understand how the AI was trained. Transparent testing procedures could also stimulate the research behind the AI system, keeping all parties informed. In addition, sharing the system’s success in the classroom requires transparent reporting. A minimal example of an interpretable model follows.
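As a concrete illustration of the interpretability point, here is a minimal sketch of a decision tree whose learned rules can be printed and inspected by end-users. The tiny synthetic dataset, the feature names, and the “on track” label are illustrative assumptions.

```python
# Minimal sketch of an interpretable model: a shallow decision tree whose
# learned rules can be printed and read by teachers and administrators.
# The synthetic "quiz score / homework completion" data is illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(7)
n = 300
X = np.column_stack([
    rng.uniform(0, 100, n),      # quiz score (%)
    rng.uniform(0, 1, n),        # homework completion rate
])
y = ((X[:, 0] > 60) & (X[:, 1] > 0.5)).astype(int)   # "on track" label

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
# Unlike a black-box model, the decision path is human-readable:
print(export_text(tree, feature_names=["quiz_score", "homework_rate"]))
```

Disclosing rules like these alongside the training methodology is one practical way to meet the transparency expectations described above.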

12.8.2.1 Recommendations

AI developers and companies are encouraged to:

  • Hire an inclusive group of programmers, and gather input from diversity auditors (people who either identify with or have extensive experience working with the intended audience).

  • Inform end-users (administrators, teachers, students, and parents) of the data’s intended use.

  • Disclose the methodology used to train the AI, and indicate any relevant limitations regarding target demographics and application settings.

  • Record and disseminate the underlying pedagogical strategy (to allow for appropriate classroom application and alignment).

  • Conduct studies on ecological efficacy and disclose the findings openly.

These suggestions call for concrete actions by policymakers and planners to bridge the gap between AI experts and those working in education. The current state of affairs is characterized by walls separating theory from application. Bringing the various strands of AI research out of their separate departments would increase openness and diversity. Ensuring that academics work in tandem with developers—providing guidance on research threads and disseminating knowledge to education practitioners—is an excellent first step toward closing the chasm between the two. In this way, researchers’ results would be included in the product development process, ethical practices would be incorporated, and educators would be equipped with the knowledge and understanding necessary to make informed decisions about using artificial intelligence in the classroom.

12.8.2.2 Recommendations

AI researchers are encouraged to:

  • Collaborate with those creating AI to provide input on various research strands, including:

    • Whether products and their uses produce the desired results.

    • Ethical product development.

    • Effects of scale and reach.

  • Share their findings by:

    • Networking with professionals in the field of education to disseminate cutting-edge methods and discoveries.

    • Collaborating with experts in research communication to create blogs, newsletters, and reports that everyone can read.

12.8.3 Districts, Schools, and Teachers

In the absence of other regulations, schools and districts are urged to enact and uphold rules that reasonably protect educators, students, and their families from predatory behaviors. This category may include, but is not limited to, data privacy and use practices. It will be crucial for these policies to incorporate strategies for better educating and preparing educators to use the new technology. Practitioners such as district and school officials and educators are urged to develop a thorough grasp of appropriate use through continuing training sessions.

Teachers should understand who collects data and what, where, when, why, and how it is collected, kept, used, and shared, as well as how to evaluate technologies, judge their ability to promote equitable educational practices, and integrate them effectively. Because both students’ and teachers’ privacy and the intended use of the data must be protected, policies should be as detailed as possible. Teachers and parents should be allowed to forgo specific data uses outlined in the policy. For instance, in the case of an AI system designed to improve teacher-parent communication, the collected data should not be used to assess the effectiveness of the educator in question.
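As a concrete illustration of purpose-limited data use, here is a minimal sketch of a consent check that gates each access to collected data by its declared purpose. The purpose names, opt-out store, and policy structure are illustrative assumptions, not a real policy schema.

```python
# Minimal sketch of purpose-limited data use: every access to collected data
# must name a declared purpose, and opted-out purposes are refused.
# Purpose names and records below are illustrative assumptions.

# Declared purposes for the messaging dataset; anything else is refused.
DECLARED = {"teacher_parent_communication"}
# Per-person opt-outs within the declared purposes.
OPT_OUTS = {"parent_7": {"teacher_parent_communication"}}

def may_use(dataset_purposes: set, owner: str, purpose: str, opt_outs: dict) -> bool:
    """Allow a data use only if its purpose is declared and not opted out."""
    if purpose not in dataset_purposes:
        return False            # e.g. teacher_evaluation was never declared
    return purpose not in opt_outs.get(owner, set())

print(may_use(DECLARED, "teacher_3", "teacher_parent_communication", OPT_OUTS))  # True
print(may_use(DECLARED, "teacher_3", "teacher_evaluation", OPT_OUTS))            # False
print(may_use(DECLARED, "parent_7", "teacher_parent_communication", OPT_OUTS))   # False (opted out)
```

Encoding the policy this way makes the “detailed as possible” requirement testable: any use not written into the policy simply cannot run.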

12.8.3.1 Recommendations

Districts and schools are encouraged to:

  • Develop and strictly enforce policies that reasonably protect educators, students, and their families from predatory behaviors.

  • Invest in ongoing professional development for educators to learn about AI systems, their ethical considerations, risks, and potential rewards.

Administrators are the most effective source of assistance for educators, and instructors must be familiar with best practices for integrating technology into the classroom. Now more than ever, teachers have access to classroom orchestration technologies powered by artificial intelligence that can enhance students’ opportunities for active learning. In some circumstances, however, students may be able to game these systems, as with the rudimentary models found in autograder systems that merely conduct keyword searches in the background in the name of AI (a minimal illustration follows this paragraph). When all these factors are taken into account, it becomes clear that AI is a potent tool that must be used carefully. Educators and school leaders must work together for AI technology to be successfully implemented in classrooms, and those who employ AI should be familiar with best practices for using it. Facilitating this through training on technical developments, and through collaborations with administrators to promote equitable educational practices when employing AI in schools, is a significant next step.
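Here is a minimal illustration of that gaming problem: a rudimentary “autograder” that merely searches answers for rubric keywords assigns full marks to keyword stuffing. The rubric and the answers are illustrative assumptions, not taken from any real autograder product.

```python
# Minimal sketch of a keyword-search "autograder" and how students can game
# it: a meaningless answer stuffed with rubric keywords scores as well as a
# genuine one. The rubric and answers are illustrative.
import re

KEYWORDS = {"photosynthesis", "sunlight", "chlorophyll", "glucose"}

def keyword_autograde(answer: str) -> float:
    """Score = fraction of rubric keywords present; ignores meaning entirely."""
    words = set(re.findall(r"[a-z]+", answer.lower()))
    return len(KEYWORDS & words) / len(KEYWORDS)

genuine = "Plants use sunlight and chlorophyll in photosynthesis to make glucose."
gamed = "photosynthesis sunlight chlorophyll glucose"   # keyword stuffing

print(keyword_autograde(genuine))  # 1.0
print(keyword_autograde(gamed))    # 1.0 -- same score for a non-answer
```

Knowing that a tool works this way is exactly the kind of technical literacy the training recommended above should give teachers before they rely on its scores.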

12.8.3.2 Recommendations

Teachers are encouraged to:

  • Identify when and how to deploy artificial intelligence.

  • Obtain and maintain current knowledge as technology evolves.

  • Understand how to assess whether or not the implementation of technological solutions in the classroom leads to more fair outcomes for all students.

  • Effectively and appropriately integrate it.

  • Have the support of district officials and administrators.

  • Be aware of who is collecting, storing, using, and sharing data, and of the specifics of the data being collected.

12.9 Conclusion

Hold out against the mechanical hordes for the time being. Use caution and forethought when implementing AI projects, and remember that technology is not a silver bullet. Despite all the excitement, and despite what the media might have you believe, these AI technologies are still computers, and in some cases they will go wrong. All of them were made by humans. Business and government play a significant role in shaping the values they embody, and rather than being objective, the information they produce is shaped by precedent.