Introduction

Cyberlearning is a recent branch of educational psychology that has increased in importance as new technologies have been developed and have proliferated in our classrooms (Montfort & Brown, 2013). The National Science Foundation has developed cyberlearning programs to fund exploratory and synergistic research projects that emphasize learning technologies for the education and re-education of learners of all ages in science, technology, engineering, and mathematics. Cyberlearning involves the convergence of psychology, education, learning technologies, computer science, engineering, and information science. Given the similar rate of advances in educational neuroscience over the past couple of decades, there is a growing interest in the interaction between neuroscience and education (Stein & Fischer, 2011). There are now dozens of laboratories around the world that have converged to investigate education questions using both cyberlearning and neuroscience approaches. Technological advances surround education, and educators regularly connect or disconnect from others via multifarious digital venues. While cyberlearning has called attention to the stimulating potential that these new technologies (and the research behind them) have to offer, less emphasis has been placed upon the moral and ethical issues that may result from the widespread use of learning technologies and neuroscience. This chapter offers a first attempt at discussing some of the ethical issues inherent in brain-based cyberlearning research and practice. It is important to note that this discussion will need to be expanded to include a wider sociocultural discourse. Brain-based learning technologies have the potential to change, both positively and negatively, not only understandings of humanity in general, but also specific and contextualized notions of personhood, free will, conscious experience, authenticity, and relatedness to others.

Ethics in Educational Technology

While most brain-based educational technologists are not philosophers, and few have extensive experience as ethicists, they often deal with moral issues and dilemmas. These range from the daily awareness of distributive justice as they consider the imbalanced allocation of technologies in schools, to discussing and balancing the complex issues involved in educational neuroscience research with learning technologies. These situations are often challenging, and some are quite perplexing. In general, training in ethical issues typically involves a handful of courses (perhaps only one course) emphasizing codes of conduct and ethical principles developed initially by Beauchamp and Childress (2001). The content may include a discussion of the Nuremberg Code (Allied Control Council, 1949), the World Medical Association’s Declaration of Helsinki (1964), and the Belmont Report (1978). From the Belmont Report (i.e., Ethical Principles and Guidelines for the Protection of Human Subjects of Research), we find three principles that provide the foundation for many current ethical guidelines for behavioral research: respect for persons, beneficence, and justice (Office for Human Research Protections [OHRP], 1979). While there is some terminological variation across these guidelines and codes, they include the following ethical principles: autonomy (i.e., free will or agency); beneficence (i.e., mercy, kindness, and charity); nonmaleficence (i.e., do no harm); and justice (i.e., fair distribution of benefits and burdens).

Attempts have been made by the Association for Educational Communications and Technology (AECT) to define ethical research and practice: “Educational technology is the study and ethical practice of facilitating learning and improving performance by creating, using, and managing appropriate technological processes and resources” (Januszewski & Molenda, 2007, p. 1). Furthermore, AECT’s TechTrends offers a column on various aspects of normative and applied ethics in educational technology (see, for example, Yeaman, 2016). Michael Spector (2005) proposed an Educratic Oath for educators that included: (1) refraining from acts that impair learning/instruction; (2) encouraging acts that improve learning/instruction; (3) acting in an evidence-based manner; (4) disseminating principles of instruction; and (5) respecting individual rights.

Given that the Educratic Oath was not widely embraced, Spector (2015, 2016; Spector, Merrill, Elen, & Bishop, 2013) moved from principles to a more general explication of values. Specifically, Spector (2016) argued that approaches to ethical issues in the use of educational technologies should include five interrelated dimensions: values, principles, persons, context (e.g., school), and technologies. In addition to Spector’s five ethical areas, a brain-based cyberlearning approach to ethics needs to take seriously the advances in cognitive, affective, and social neuroscience that have the potential to revolutionize educational assessments (Parsons, 2015; Parsons, Gaggioli, & Riva, 2017) and training using technology-rich environments (Immordino-Yang & Singh, 2011).

Perspectives from the Neurosciences on Cyberlearning Technologies

In the past decade, there has been a rapid increase in research from the neurosciences that relates the human brain’s neural mechanisms to the Internet (Montag & Reuter, 2017), social media (Meshi, Tamir, & Heekeren, 2015), virtual reality (Bohil, Alicea, & Biocca, 2011; Parsons et al., 2017; Parsons, Rizzo, Rogers, & York, 2009), and related technologies (Kane & Parsons, 2017; Parsons, 2016, 2017). To encourage the inclusion of research advances in cognitive, affective, and social neuroscience in the cyberpsychology domain, Parsons (2017) proposed a framework for combining neuroscience and cyberlearning for the study of social, cognitive, and affective processes and the neural systems that support them. Following Parsons’s brain-based cyberpsychology approach, a cyberlearning approach that draws from the neurosciences can be understood as the study of (1) the neurocognitive, affective, and social aspects of students interacting with technology and (2) the affective computing aspects of students interacting with devices/systems that incorporate computation. As such, a brain-based cyberlearning approach will be interested in both the ways in which educators and students make use of devices and the neurocognitive processes, motivations, intentions, behavioral outcomes, and effects of online and offline use of technology.

What are some key themes that have emerged from the neurosciences for a brain-based cyberlearning? First, there is emerging research that supports the long-held view of educators that thinking and learning are concurrently cognitive and affective processes that occur in social and cultural contexts (Fischer & Bidell, 2006; Frith & Frith, 2007; Mitchell, 2008). In the same way that affective neuroscientific evidence links students’ bodies and minds in processes of emotion, social neuroscientific evidence links students’ self-perceptions to the understanding of others (Immordino-Yang, 2008; Uddin, Iacoboni, Lange, & Keenan, 2007). The interactions between students and others result in a social extension of their cognitive processes. Likewise, the interactions among students, smart classrooms, and cyberlearning technologies serve to extend their cognitive processes. While students and educators behave in accordance with subjective goals and interests that develop over time as they interact socially, the values, judgments, and calculations made by technologies represent the data, algorithms, and system constraints programmed by their developers (Immordino-Yang & Singh, 2011). Given that the parameters governing these calculations are often decided outside of interactions with the student (either beforehand or during postprocessing), there are concerns about the potential ethical implications of using these technologies.

Advances in cyberlearning technologies have heightened our awareness of the impact technologies have on the structure and function of the student’s brain. Along with these rapid developments is an increased need to grapple with the ethical implications of cyberlearning tools and discoveries. Although several reviews have been written to synthesize the growing literature on neuroscience and ethics in general (Clausen & Levy, 2015; Farah, 2012; Illes, 2017; Racine & Aspler, 2017), there is a dearth of discussion related to the ethical implications of brain-based cyberlearning research, theory, and praxes. A brain-based cyberlearning framework will evolve at the interface of the neurosciences, education, and technologies of the extended mind. Educational theories and praxes are being and will continue to be transformed by the neurosciences. The ethical issues facing a rapidly developing brain-based cyberlearning fall under at least two distinct types: (1) those inherited from other areas of ethics (e.g., neuroethics; Lalancette & Campbell, 2012) and (2) those that are unique to or generated by the field of cyberlearning and other more general areas of concern to mind, brain, and educational technologies (Stein & Fischer, 2011).

Extended Cognition

An additional component for our understanding of cognitive, affective, and social processes for cyberlearning is the notion that technology is an extension of our cognitive processes (Parsons, 2015, 2017). It is becoming increasingly apparent that the educational technologies used in schools have the potential to extend a child’s cognitive processes beyond the embodied cognition of their forebears (Parsons et al., 2017). Andy Clark and David Chalmers (1998) developed an “extended mind” theory, in which cognitive processes are understood as going beyond wetware (i.e., the child’s brain) to the educational software and hardware used by the child. This perspective allows for an understanding of the child’s cognition as processed in a system coupled with the child’s environment.

Clark and Chalmers describe the extended mind in terms of an extended cognitive system that includes both brain-based cognitive processes and external objects (e.g., technologies like tablets, iPads, smartphones) that serve to accomplish functions that would otherwise be attained via the action of brain-based cognitive processes acting internally to the human (Clark, 2008; Clark & Chalmers, 1998). They make use of a “parity principle” that states:

If, as we confront some task, a part of the world functions as a process which, were it to go on in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process. (Clark & Chalmers, 1998, p. 8)

From the parity principle, one can argue that if a process that happens in the classroom (external world) would readily be classified as part of the cognitive toolkit when it goes on in the student’s head, then it is, at least for that point in time, part of the cognitive process. Clark and Chalmers illustrate the parity principle with a thought experiment involving the fictional characters Inga and Otto. Both Inga and Otto are navigating to a museum. Inga can navigate via recall of directions from her internal brain-based memory processes. Otto, on the other hand, has Alzheimer’s disease. This requires Otto to depend on directions found in a notebook, which serves as an external navigation aid to his internal brain-based memory processes. Such extended mental processing can be understood as information-processing loops that spread beyond the neural. Clark and Chalmers assert the equivalence of neuronal memory and paper memory as information storage strategies in the cases of Otto and Inga.

Paul Smart (2012) has applied the idea of extended cognitive processes to the specific sociotechnical context of the Web. The result is a “Web-extended mind,” in which the Internet serves as a mechanism that realizes human mental states and processes. Examples can be found in the ways in which students regularly enhance their cognitive performance with technologies (e.g., tablets and iPads). Students are able to store their memories using technologies. While a student may not be able to remember what the average daytime temperature in the winter is near the poles on Mars, the student, plus her technology, can recall that it can get down to −195°F (−125°C).

The potential of the extended cognitive processing perspective seems even more apparent with the advent of mobile technologies. Although early iterations of the Internet were bounded by wires, later iterations required only that the user be near a router. Today, with the influx and expansion of tablets and iPads in the classroom, the vast information base of the Internet is available to the student. The number of tablets and smartphones found in schools is quickly approaching the point where billions of students will have access. Moreover, the technological assets of tablets and iPads offer several improvements to deliberations on externalization: whereas early metaphors emphasized external memory storage, iPads and tablets connected to the Internet extend beyond memory assistants to robust mobile computation devices. In fact, mobile technologies connected to the Internet allow teachers and cyberlearning researchers to investigate the interactions of students as they participate in a global workspace and connected knowledge bases. Furthermore, access to the Internet may allow for interactive possibilities that represent a paradigm shift in how we see student learning and in the ways in which we understand the nature of students’ cognitive and epistemic competences.

It is important to consider the circumstances under which a device qualifies as a technology of the student’s extended mind. First, it is helpful to explore what is meant by the word “mind.” While a fully nuanced account of the term “mind” is beyond the scope of this chapter, a few words of clarification will help situate the notion of a technology of the student’s extended mind in context. While the term mind is used liberally in this chapter, it is not with the intent of slipping into some version of substance dualism (i.e., that there is brain-stuff and mind-stuff). Instead, a specific distinction is made between brain and mind, in which the brain is understood as a thing while the mind is understood as a concept. The aim here is to keep from mixing these ontological levels in a way that so often ends in muddling the relation between brain and mind. One way of approaching this issue is to consider the mind as representing the full set of cognitive resources that the student deploys in the service of thinking. Thinking can be understood as comprising reflective, algorithmic, and autonomous processes (Stanovich, 2009). This approach comports well with the extended mind hypothesis because the idea of a “full set of cognitive resources” allows for contributions beyond the brain to conceptions of mental processing. The extension of mental processes outside of the brain (e.g., technologies of the student’s extended mind) means that mental processes cannot be fully reduced to brain processes (Levy, 2007a; Nagel, Hrincu, & Reiner, 2016; Reiner & Nagel, 2017).

Technologies of the Student’s Extended Mind

What sorts of devices can be considered technologies of the student’s extended mind? One thing to keep in mind when answering this question is that not every algorithmic function performed by devices (external to the student’s brain) should be understood as a technology of the student’s extended mind. Instead, it is preferable to conceptualize technologies of the student’s extended mind as a fairly continuous interface between brain and algorithm in which the student perceives the algorithm as being an actual extension of her mind. For example, consider an updated version of context-based learning games like the ones developed by the MIT Media Lab in the early 2000s (Klopfer, Perry, Squire, Jan, & Steinkuehler, 2005; Mystery at the Museum, 2003). In Mystery at the Museum, the student takes part in an indoor augmented reality simulation that is enacted throughout the Boston Museum of Science. The background narrative involves a burglary that occurred in a science museum, and the students are instructed to apprehend the burglar by playing the role of a biologist, technologist, or detective so that they can ascertain what was stolen and what methods were used during the robbery. Mystery at the Museum was implemented using Wi-Fi for short-range information acquisition and communication. For our updated version, we could have the students use the Global Positioning System (GPS) in a tablet. Visualize a 13-year-old boy, Tommy, who has been instructed on how to enter exhibits into the search engine of a tablet application that will show him the best route to destinations for the context-based learning game quest. Once he arrives at the destination, the augmented reality enabled tablet can be used interactively by Tommy to learn about science and to solve the mysteries of the fictional burglary. This tablet application is particularly helpful because it keeps Tommy from getting lost, as many of the game destinations lead him to visit parts of the museum with which he is unfamiliar. Tommy has heard stories from his classmates that they are not sure that the GPS interface for the museum always leads to the right place. As a result, Tommy remains alert to his environment so that he can be sure that he makes it to quest destinations in the museum without problems.
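To make the routing step concrete, the following is a minimal sketch of how a tablet application might compute a “best route” between exhibits, assuming the museum is modeled as a simple graph of waypoints; the layout, exhibit names, and the best_route function are hypothetical illustrations rather than the actual Mystery at the Museum implementation.

```python
from collections import deque

# Hypothetical museum layout: each waypoint maps to its adjacent waypoints.
MUSEUM_GRAPH = {
    "entrance": ["dinosaur_hall", "museum_store"],
    "dinosaur_hall": ["entrance", "space_exhibit"],
    "museum_store": ["entrance", "cafe"],
    "space_exhibit": ["dinosaur_hall", "biology_lab"],
    "cafe": ["museum_store", "biology_lab"],
    "biology_lab": ["space_exhibit", "cafe"],
}

def best_route(start, destination, graph=MUSEUM_GRAPH):
    """Breadth-first search for the fewest-hops route between two waypoints."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return path
        for neighbor in graph[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route found

# Tommy enters an exhibit; the app displays the suggested route.
print(best_route("entrance", "biology_lab"))
```

The point for present purposes is simply that the computation occurs in the device, entirely outside Tommy’s head.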

Is Tommy’s GPS functioning as a technology of the student’s extended mind? While it is undoubtedly performing computations that are external to Tommy’s brain, the GPS in Tommy’s tablet is probably better considered as cognitive assistance. Why is this the case? The answer is that neither the algorithmic calculations nor Tommy’s use of them is integrated with Tommy’s cognitive processes. Now consider a different scenario in which Tommy has taken part in the context-based learning game several times over the course of a month. Even though he now has slightly more knowledge of the museum, he always uses the GPS in his tablet to navigate through the museum, and it has not failed him. At this point, when he enters an exhibit into the tablet application’s search interface and the route is presented on the tablet screen, he automatically follows it to the destination suggested by his tablet. The GPS is beginning to function as a technology of the student’s extended mind because Tommy has integrated its algorithmic output into the working of his mind.

Neuroethical Issues for Technologies Extending the Student’s Mind

What are the potential ethical implications of Tommy using a technology that extends his cognitive processes beyond his brain? One place to look for brain-based ethics is the relatively new discipline of neuroethics. Today, many ethical discussions about brain and technology interfaces take the form of neuroethical musings about the nature of the brain and the ways in which persons interact with technologies to make decisions. The discipline of neuroethics is often understood as twofold, with the neuroscience of ethics and the ethics of neuroscience as its two domains of inquiry. Herein, the main concern is the neuroscience of ethics and investigations of the digital self, values, beliefs, and motivations. While neuroethical issues for technologies of the extended mind have been discussed by a number of neuroethicists (see, for example, Heersmink, 2017; Heersmink & Carter, 2017; Levy, 2007a, 2007b, 2011; Nagel et al., 2016; Reiner & Nagel, 2017), they were first introduced in Neil Levy’s (2007a) paper, which argued that the extended mind hypothesis has substantial implications for neuroethics. From a neuroethical perspective, Levy argues that the parity principle (if a cognitive process that happens in the classroom would readily be classified as part of the cognitive toolkit when it goes on in the student’s head, then it is, at least for that point in time, part of the cognitive process) found in the extended mind hypothesis can be extended to an ethical parity principle for neuroethics.

Neuroethics focuses ethical thought on the physical substrate subserving cognition, but if we accept that this substrate includes not only brains, but also material culture, and even social structures, we see that neuroethical concern should extend far more widely than has previously been recognized. In light of the extended mind thesis, a great many questions that are not usually seen as falling within its purview—questions about social policy, about technology, about food and even about entertainment—can be seen to be neuroethical issues. (Levy, 2007a, b)

Levy offers two moral principles for neuroethics, labeled as versions of the ethical parity principle, that can be used for discussion of moral concerns about neurological modification and enhancement: (1) Strong ethical parity: given that the mind extends into the external environment (e.g., the classroom), alterations of external props (e.g., iPads, tablets, smartphones) used for cognitive processes have, ceteris paribus (i.e., all other things being equal), ethical parity with changes in the brain; and (2) Weak ethical parity: alterations of external props have, ceteris paribus, ethical parity with changes in the brain, to the precise extent to which the reasons for finding brain changes problematic can be transferred to changes in the environment in which the brain is embedded. Support for Levy’s ethical parity principle is drawn from Clark and Chalmers’s view that “in some cases interfering with someone’s environment will have the same moral significance as interfering with their person.”

Reiner and Nagel (2017; see also Nagel et al., 2016) agree with Levy and present three issues that have particular import for further discussion: (1) threats to autonomy from manipulations of technologies of a person’s extended mind; (2) threats to privacy from examinations of technologies of a person’s extended mind; and (3) cognitive enhancements via technologies extending a person’s mind. In the following, there is a discussion of Reiner and Nagel’s manuscript as it applies to technologies extending the student’s mind. A fundamental feature of their first issue, autonomy, is that the autonomous student should not be unduly influenced when making decisions. It is important to note that decisions made by students are frequently guided by the contributions of others (e.g., teachers, peers, caregivers) and/or the books and materials that they read, as well as their physical environment (e.g., classroom, playground). As a result, some have updated traditional notions of autonomy (Beauchamp & Childress, 2001) to relational autonomy (Christman, 2004; Mackenzie, 2010; Nedelsky, 1989). In the same way that establishing what influences are due and undue in the context of others can be a difficult task, so too can it be difficult to determine the influence of technologies that extend the student’s mind. Before doing so, it is worth considering Reiner and Nagel’s (2017) explication of the general features of algorithms that could impact the degree to which influences are considered to be violations of autonomy. Nagel et al. (2016) argue for three important factors: (1) the algorithm’s persuasiveness in decision-making; (2) the gravity of the decision; and (3) the algorithm’s ability to identify the student’s preferences.

In terms of the persuasiveness of technologies, violations of autonomy become apparent when decision-making is unduly influenced (Verbeek, 2006, 2009). If the student is still able to participate thoughtfully in decision-making and can reflect on the situation, then the impact of the technology will not be considered a violation of autonomy because there is no impediment to self-regulation. For their next factor, the gravity (i.e., seriousness) of the decision is relative to the level of potential harm or benefit that a student may experience as a result of a given decision. Hence, the lower the assumed potential costs or benefits, the lower the apparent seriousness of the decision. Finally, their third factor is the technology’s ability to learn about student preferences. If a technology simply executes a set of preprogrammed directives, then there is less concern. On the other hand, if the technology can monitor and learn from student behaviors and preferences, then there is an increased possibility that an autonomy infraction may occur. Given these factors, an extension of the GPS example (see above) can be offered to illustrate the relevant issues for a student.
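As a purely illustrative sketch, the three factors might be operationalized as a crude screening heuristic when reviewing a proposed cyberlearning feature; the numeric scales, weights, and threshold below are assumptions introduced here for illustration and are not part of Nagel et al.’s (2016) framework.

```python
from dataclasses import dataclass

@dataclass
class InterventionProfile:
    """Rates a proposed algorithmic intervention on the three factors (0.0 to 1.0)."""
    persuasiveness: float       # how strongly it steers the student's decision
    gravity: float              # potential harm/benefit of the decision at stake
    preference_learning: float  # degree to which it models the student's preferences

def autonomy_risk(profile: InterventionProfile) -> float:
    """Crude unweighted average: higher values suggest closer ethical scrutiny."""
    return (profile.persuasiveness + profile.gravity + profile.preference_learning) / 3

# Compare a preprogrammed route suggestion with an adaptive, nudging recommender.
static_gps = InterventionProfile(persuasiveness=0.2, gravity=0.1, preference_learning=0.0)
adaptive_nudge = InterventionProfile(persuasiveness=0.7, gravity=0.3, preference_learning=0.9)

for name, profile in [("static_gps", static_gps), ("adaptive_nudge", adaptive_nudge)]:
    flag = "review" if autonomy_risk(profile) > 0.5 else "low concern"
    print(f"{name}: risk={autonomy_risk(profile):.2f} ({flag})")
```

Any such scoring would, of course, be a starting point for deliberation rather than a substitute for it.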

An illustrative example of the neuroethical concerns for technologies of the student’s extended mind may begin with the GPS application for the museum on Tommy’s tablet described above. Recall that Tommy’s initial use of the GPS application involved vigilant attention to both the application and the environment to make sure that he could trust the functioning of the application and not get lost. Here the tablet application is not functioning as a technology of the student’s extended mind because, while it is performing computations that are external to Tommy’s brain, the GPS in Tommy’s tablet is probably better considered as cognitive assistance.

Consider another situation, in which Tommy has been using the tablet application for a couple of weeks, and the relationship between Tommy and the tablet app has grown more intimate: Tommy now integrates its algorithmic output into the working of his mind while traveling both inside the museum and around his neighborhood (e.g., to and from school, as well as to and from the locations of various extracurricular activities). Tommy is continuing his training in the museum, and while working on an assignment that requires that he travel to an exhibit, he hears alerts from the tablet as he passes a sign advertising the museum’s constellation of eateries (on the first floor, right across from the Museum Store); the alerts chime again when the museum’s eateries are just up ahead.
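A minimal sketch of how such preference-driven alerts might be triggered is given below; the visit log, threshold, and prompt text are hypothetical and merely stand in for whatever the vendor’s actual recommender system does.

```python
from collections import Counter

# Hypothetical log of places Tommy has lingered during past museum visits.
visit_log = ["cafe", "space_exhibit", "cafe", "museum_store", "cafe"]

def learned_preferences(log, min_visits=2):
    """Treat any location visited at least min_visits times as a learned preference."""
    counts = Counter(log)
    return {place for place, n in counts.items() if n >= min_visits}

def maybe_nudge(current_location, nearby_places, preferences):
    """Fire a suggestion when the student passes near a place the model 'knows' they like."""
    for place in nearby_places.get(current_location, []):
        if place in preferences:
            return f"You're near the {place.replace('_', ' ')} -- want to stop by?"
    return None

nearby_places = {"first_floor_hallway": ["cafe", "museum_store"]}
prefs = learned_preferences(visit_log)
print(maybe_nudge("first_floor_hallway", nearby_places, prefs))
```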

Here, the situation has changed as the algorithms have learned Tommy’s preferences and are attempting to influence his actions. Moreover, the algorithm from the tablet GPS application may increase its level of suggestion by “asking” Tommy whether he would like to take a moment to get something to eat, or perhaps shop in the museum store (right across from the museum’s eateries). While Tommy may recognize that he needs to complete his assignment (continue his quest to solve the fictional burglary mysteries), he reasons that little harm would come from stopping to get something to eat and perusing the gift shop. Here, one finds a clear effect of the technology on Tommy that was influential enough to cause an alteration of his second-order desires to complete his assignment. Most likely, parents and teachers (as well as ethicists) would view this as undue influence. While the influence is relatively trivial, this scenario reflects a violation of autonomy.

This violation becomes much more pronounced when one considers the fact that the very same algorithm that has become an extension of Tommy’s mind is also an extension of the mind of the corporate entity that designed the tablet application. Perhaps the corporate entity was paid by vendors at the Café and Museum store for directing Tommy to them. Such potential conflicts of interest muddy the ethical waters when attempting to ascertain the extent to which a technology of the student’s extended mind has resulted in a violation of autonomy.

Cognitive Enhancement

Another area of concern for cyberlearning ethics is the issue of using advanced technologies to enhance cognitive abilities (Farah et al., 2004; Lalancette & Campbell, 2012; Parens, 2000). Developments in scientific knowledge promise to enhance students’ cognitive performance, memory, and/or productivity through new applications of neuropharmaceuticals and/or possible technological advances (Forlini, Gauthier, & Racine, 2013). Cognitive enhancement refers to the capability of achieving psychological enhancements beyond what is needed to maintain or restore good health, such as modifications to memory and/or executive functions (Farah et al., 2004; Juengst, 1998). While the widespread use of cognitive enhancers has led some to conclude that cognitive enhancement is now a socially accepted practice (Berg, Mehlman, Rubin, & Kodish, 2009; Farah et al., 2004; Singh & Kelleher, 2010), there are increasing calls for discussions of the ethical issues surrounding the use of biomedical techniques to enhance cognition (Gaucher, Payot, & Racine, 2013).

Students are increasingly using prescription drugs to cognitively enhance their academic performance (Howard-Jones, 2010; Maher, 2008; Poulin, 2001; Wilens et al., 2008). These so-called “smart pills” are nootropics (i.e., neuropharmaceuticals) that were originally developed to treat neurodevelopmental and other brain-based disorders. These nootropics have started making their way into schools because healthy (typically developing) students believe that they can use them to enhance memory (piracetam), wakefulness (modafinil), and attention (methylphenidate/Ritalin).

In an article exploring the ethical implications of cognitive enhancement in students, Singh and Kelleher (2010) urged professional medical associations to establish policy statements related to bringing neuroenhancement into primary care. One example can be seen in the American Academy of Neurology’s recent development and publication of a position statement regarding the ethics of pediatric enhancement within the patient–parent–physician relationship (Graf et al., 2013). The statement concluded that physicians should not prescribe cognitive enhancers to children or adolescents, a decision based on the fiduciary responsibility of physicians toward their pediatric patients.

An obvious ethical challenge for education is that the non-clinical use of nootropics is a lifestyle choice made in response to performance pressures in a competitive environment (Racine & Illes, 2008). Illes (2006) described four main ethical challenges related to the use of nootropics: safety, coercion, distributive justice, and personhood. From this, questions emerge: Does greater effort confer “dignity”? Is the student the same person when on Ritalin? Moreover, there seems to be a coercive factor in teachers’ preference for enhanced children because they tend to be more receptive to learning and interactions. That said, the restriction of nootropics could also be viewed as coercive when the restriction limits freedom of choice about whether or not to enhance. A further issue is distributive justice, because unfairness results between haves and have-nots. Inequity in society, from private tutoring to technological access, is not an issue specific to nootropics until the question of cheating is added. Is enhancement in itself a form of cheating? Discussions of cheating include issues of fairness, and cheating carries de facto moral wrongness when understood as the infringement of implicit rules and/or access to inequitable benefits (Lalancette & Campbell, 2012).

Conclusions

The challenges of applying neuroscientific findings to learning technologies are numerous, but have a common denominator: the framework supporting a brain-based cyberlearning has to be well defined and explicit. Attempts have been made by the Association for Educational Communications and Technology to define ethical research and practice. Moreover, attempts have been made to present a framework for approaching ethical issues in the use of educational technologies (Spector, 2016). Herein, there has been a discussion of the ways in which such frameworks can be extended to develop a brain-based cyberlearning approach to ethics that emphasizes the advances in cognitive, affective, and social neuroscience.

Extending the framework to some extent involves the recognition that our mental states are constituted by our neurocognitive and affective states and a shifting collection of external resources and scaffolding. Our understanding of what constitutes a person is partially a function of the student’s environment, inasmuch as the student’s capacities are dependent on features of her context. Moreover, a student’s identity is largely a product of social relations to others.

Following the extended mind thesis, there is a strong prima facie case for ethical concerns accompanying various means of enhancing cognitive performance. While some approaches to learning technologies emphasize ethical principles, neuroethics focuses on the neural substrates subserving cognitive processes. Herein, the emphasis has been upon combining these approaches via an argument that mental processes include not only brains, but also learning technologies, and even classroom social structures. This allows for the ethical concerns of educational technologists, educational neuroscientists, and neuroethicists to extend far more widely than has previously been recognized. Given the extended mind thesis, a number of ethical concerns about using educational technologies can be seen to be neuroethical issues. In making decisions about how educators structure classroom environments and employ educational technologies, decisions can be made about the ways in which technologies of the extended mind are employed, and such decisions must be informed by neuroethical thinking.