Introduction

Academic misconduct refers to forms of cheating that involve giving or receiving unauthorised assistance or claiming credit for others’ work. It includes relatively minor and unintentional breaches (e.g., incorrect, inappropriate or absent citation), breaking rules during exams (e.g., looking over at another student’s answers, bringing unauthorised notes) and intentional fabrication of assignments (e.g., buying essays from essay mills, using and contributing material to file-sharing websites). According to Ampuni et al. (2020), the International Centre for Academic Integrity revealed that 98.78% of students admitted to engaging in some form of academic misconduct. While some researchers suggest that students in ethically oriented professions such as nursing cheat less than students in other fields, other research suggests that their prevalence of cheating is roughly equal to that of students in other schools (Bultas et al., 2017; Lynch et al., 2021). At the same time, other studies report that engineering and business undergraduates engage in academic misconduct more than undergraduates in other schools (Freire, 2014; Tabsh et al., 2017).

Two main consequences of academic misconduct arise for both higher education and wider society. First, universities cannot ensure that graduates will enter the workforce with sufficient expertise for their duties. Researchers in psychology, nursing, and aviation academia have been concerned about the way academic misconduct produces graduates who fail to meet the competency requirements of their jobs (Asim et al., 2015; Bultas et al., 2017; Keller et al., 2012; Lynch et al., 2021). Second, academic misconduct undermines the university’s mission of developing responsible, prosocial citizens. Those who engage in academic misconduct are more likely to engage in, or at least accept, other professional misbehaviour such as fraudulent reporting and negligent practices (Macale et al., 2017). Consequently, traditionally trusted professions such as nursing, and the degrees their graduates earn, lose value. For example, Liao et al. (2017) found in a survey of Chinese biomedical researchers that participants believed 40.1% of published scientific articles involved some form of academic misconduct. Farisi (2013) discusses the long-term cultural problems of academic misconduct as a betrayal of truth and knowledge accumulation. Moreover, an erosion of trust among the professional class increases the overall level of corruption in society more than street crime does (Sutherland, 1983).

Despite the importance of ethical norms in reducing academic misconduct, the application of formal frameworks from moral psychology to empirical studies of academic integrity is somewhat limited. The purpose of the next section is to demonstrate why focusing on the ethical dimension of academic misconduct offers the most efficient and enduring means of enhancing academic integrity and post-graduation professional behaviour. To do this, I evaluate the main approaches to the reduction of academic misconduct: policing, prevention, and ethics.

Approaches to Academic Misconduct Reduction

Farisi (2013) identified three umbrella concepts that capture strategies for dealing with academic misconduct: policing, prevention, and ethics. Below, I explore the contributions and blind spots of each focus in empirical investigations of academic integrity.

Policing

Policing focuses on the detection and punishment of misbehaviour. In the academic context, detection often takes the form of text-matching software (Ison & Szathmary, 2016). Punishments for those caught range from a warning or a loss of marks to exclusion from a course or institution. In rare instances punishments may include recording the misconduct on a student’s academic transcript, which could affect their employability (Farisi, 2013).

Policing is an inefficient approach to controlling academic behaviour because of difficulties attached to both detection and punishment. First, detection is difficult because assessment tasks vary widely across disciplines. For example, Franclinton et al. (2020) note that in engineering, many computers outside designated labs are not equipped with the programs or processing power required for engineering assessment tasks. For common assignment formats like essays, Meuschke and Gipp (2013) found that text-matching software had difficulty detecting insidious forms of plagiarism such as translations. Those who could help police academic misconduct (e.g., other students, teaching staff) are often confused about what constitutes academic misconduct or are too overworked to take on the reporting process (de Maio & Dixon, 2022; Lynch et al., 2021; Waltzer et al., 2022).

Even if administrators could optimise the detection process, determining a proportionate and meaningful punishment is challenging. For example, if the bar for recording an academic misconduct notification on the student’s academic transcript is too high, many minor academic offences that later undermine professional ethics may be overlooked. However, if the bar for receiving a black mark is too low, the punishment will lose social value and become meaningless. Therefore, improvements in policing strategies may only be a fraction of the solution in upholding academic integrity.

Prevention

Prevention strategies include promoting attitudes and norms against misconduct as well as blocking potential avenues of misconduct from the start (Farisi, 2013). This may involve blocking websites that facilitate contract cheating and verifying students’ identities before exams. One prevention strategy based on criminology is the use of situational crime prevention techniques to prevent contract cheating (Clare, 2022). Situational crime prevention focuses on manipulating perceived risks, rewards, effort, provocations and excuses: increasing the risks and effort while decreasing the rewards, provocations and excuses attached to misconduct will reduce its likelihood (Eck & Eck, 2012).

The focus on immediate circumstances differentiates situational crime prevention from many other theories that root crime in biology, early experiences and/or socialisation (Clarke & Mayhew, 1988). Such an immediate focus is advantageous because it affords more control to educators, enforcement officers and university administrators who may not have the time or resources to understand students on a deep level. In one case study, Baird and Clare (2017) adjusted aspects of a business unit assignment according to techniques from situational crime prevention. Among other tactics drawn from the framework, they varied the assignment content between classes (to reduce the chances of collusion) and gave students practice time (to reduce the provocation of stress or desperation). These assignment adjustments reduced instances of contract cheating.

While prevention strategies have further-reaching benefits than policing, they may also struggle to keep up with the variable nature of assignments across disciplines and changing historical circumstances. For example, Garg and Goel (2022) discuss how the COVID-19 pandemic’s shift to online learning, intended to slow the virus’s spread, was accompanied by a proliferation of online cheating methods. Farisi (2013) foresaw some of the COVID-related issues when considering the difficulty of enforcing academic rules in distance education formats.

Even when a workable prevention strategy emerges, it may be unethical and degrade norms of trust between students and their institution. For example, during COVID, home exams were used and some universities significantly expanded the use of remote proctoring, requiring students to conduct a live video scan of their environment before starting the exam to ensure no student had unfair aids (e.g., notes stuck to the wall). While this could stop students from trying to obtain an unfair advantage, some argued that room scans violated students’ privacy (Camp, 2022). Given the difficulty, and at times unethical nature, of prevention strategies, Farisi (2013) considered the role of ethics in promoting academic integrity values and social norms that support virtuous character.

Ethics

The ethical dimension of academic integrity refers to developing norms about honesty and understanding the importance of university environments for training prosocial citizens. Addressing the ethical dimension may involve requirements for students to complete ethics modules or educational sessions about what constitutes academic cheating (Asim et al., 2015).

Tackling the ethical dimensions of academic behaviour is arguably the most enduring strategy for enhancing integrity for two reasons. First, moral character is the most important aspect of person perception: Goodwin (2015) found that people’s judgements of moral character were more important in shaping their opinions about others than judgements of sociability or competence. Second, Tappin and McKay (2017) found that one’s sense of moral righteousness is resistant to change in the face of evidence and is more stable over time than other aspects of the self. The profound and long-term effects of developing a culture of integrity and moral duty reduce the need for administrators to micro-manage individuals and academic assessment situations. Next, I evaluate different moral reasoning frameworks that could be used in academic contexts.

Evaluation of Different Ethical Frameworks for Usage in an Academic Context

Although there has been some research on the ethical dimensions of academic behaviour, the categorisation of individual and situational variables has not been systematically considered. Most frameworks used follow cognitive-developmental principles that emphasise internal, individual-level processes; these do not sufficiently capture the effects of situational variables (Wisesa et al., 2019). The cognitive-developmental models I briefly discuss are Kohlberg’s theory of moral development, the academic integrity model, and academic integrity responsibility (Ashford, 2021; Miller et al., 2011; Wisesa et al., 2019). This chapter then covers how two moral psychological approaches, the rationalistic dual-process approach and the intuition-based moral foundations theory, conceptualise the emotional vs. cognitive and situational vs. individual factors that the cognitive-developmental models propose, as applied to academic integrity, but in a more streamlined way.

Cognitive-Developmental Models: Kohlberg’s Theory of Moral Development, the Academic Integrity Model, and Academic Integrity Responsibility

Kohlberg’s (1973) theory suggests that moral development occurs in three overarching stages: (1) pre-conventional, where the individual is most concerned with self-interest and punishment avoidance, (2) conventional, which involves a focus on conforming to social norms and obeying authority and (3) post-conventional, where individuals abide by their own conscience and ethical ideas. Importantly, however, Kohlberg’s (1973) theory focused on moral reasoning, which does not perfectly predict behaviour. Wisesa et al. (2019) used Kohlberg’s (1973) theory to categorise qualitative responses about reasons for cheating or not cheating. They found that students who gave post-conventional reasons for academic honesty were less likely to report cheating in a separate questionnaire about the prevalence of and engagement in academic misconduct than students lower in Kohlberg’s moral developmental stages. Kohlberg’s theory has been used in other academic behaviour contexts. Kiser et al. (2009), for example, used Kohlberg’s theory of moral development to understand undergraduate students’ responses to moral dilemmas within the realm of information technology.

According to Ashford (2021), the academic integrity model suggests that the path to ethical behaviour requires four steps: (1) moral awareness, where administrators highlight to students the importance of academic integrity on thinking, learning and assessment expectations, (2) moral justification, which considers the purpose and benefits of ethical behaviour, (3) moral intent, which involves foresight of potential hindrances to ethical behaviour due to psychological distancing, and (4) the execution of a moral action. Ashford (2021) broke down these steps in the context of helping students understand and take responsibility for the effect of apps and technology on their learning, a characteristic he called socio-techno responsibility.

Academic integrity responsibility suggests that two main themes drive people’s chances of engaging in misconduct: ownership of integrity and the idea of ethical culture as a collective responsibility (Miller et al., 2011). Miller et al. (2011) found that students who cited fear of punishment as a primary reason for following rules were more likely to cheat than students who reported not cheating due to personal character. Rundle et al. (2019) supported Miller et al.’s (2011) finding in their investigation of students’ reasons for not cheating, in which motivation for learning, self-control and a desire to demonstrate one’s own competence were the primary reasons.

Cognitive-developmental models attempt to trace the decision-making processes an individual may undertake before choosing ethical or unethical behaviour. However, as Wisesa et al. (2019) noted, they often fail to address situational variables that are equally important in driving academic behaviour. The main reasons Tippitt et al. (2009) reported for engaging in academic misconduct imply a need to consider both situational vs. individual variables and emotional vs. rational factors: a desire to outcompete peers; lack of preparation and desperation; not having learned that the behaviour is wrong; and the thrill of trying to avoid detection. The example of academic misconduct supports the idea that moral behaviour contains both rational-cognitive and emotional-intuitive components (Ampuni et al., 2020). Moral decision-making in the academic context involves both a calculus of the costs vs. benefits of cheating and a sense of self-confidence, self-imposed pressure, and the feeling that the institution is meeting its obligations towards students (Miller et al., 2011).

In the next sections, I outline two moral frameworks and their potential applications to academic misconduct research: the rational-cognitive dual-process model and the emotional-intuitive moral foundations theory. The dual-process model and moral foundations theory provide ways of adjusting and measuring the situational variables in a systematic way. In reviewing these models, I explore the main premises of the dual-process model and moral foundations theory, followed by their usability for empirical studies into academic integrity.

Dual-Process Model

The dual-process model suggests that people’s moral reasoning follows either utilitarianism or deontology. Utilitarianism deems behaviour moral if it aims to maximise wellbeing for the most people (Bentham, 1789). Deontology indicates that a behaviour’s moral worth should be determined by how well it adheres to existing moral standards (Kant, 1785).

Researchers often determine people’s preferred decision-making framework using their responses to a set of moral dilemmas: the content can be adapted to a variety of contexts such as medical, military, or national security (Christen et al., 2021; Gosling & Trémolière, 2021; Shao, 2020). The chosen moral dilemmas often place the utilitarian and deontological responses in opposition with one another so that participants must choose between them (Kahane et al., 2015). The classic (although much debated) moral scenario is the trolley problem: a trolley is hurtling down a train track where five people are strapped down (Salvador, 2019). You (i.e., the survey taker) can choose to leave the trolley to kill the five people or flick a switch that will divert the train onto another track where only one person is trapped. Conventionally understood, the utilitarian response is to flick the switch since this will preserve five people and kill one. The deontological response is to leave the trolley to kill the five people because moral norms suggest it is wrong to initiate an innocent person’s death but acceptable to allow an existing misfortune to continue. In moral psychological studies centred on a dual-process model, researchers usually calculate participants’ utilitarian and deontological leanings based on forced-choice responses to a battery of sacrificial moral dilemmas where the utilitarian and deontological choices are incompatible.
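To make this scoring logic concrete, the following is a minimal sketch, assuming that each dilemma in the battery is coded 1 when the participant chooses the conventionally ‘utilitarian’ option and 0 when they choose the ‘deontological’ option; the item names and scoring rule are illustrative assumptions rather than a specific published instrument.

```python
# Illustrative sketch only: scoring a forced-choice battery of sacrificial dilemmas.
# Each response is coded 1 for the 'utilitarian' option and 0 for the 'deontological' one.

from typing import Dict


def utilitarian_proportion(responses: Dict[str, int]) -> float:
    """Return the proportion of dilemmas on which the utilitarian option was chosen."""
    if not responses:
        raise ValueError("No dilemma responses provided.")
    return sum(responses.values()) / len(responses)


# Hypothetical participant who chose the utilitarian option on two of three dilemmas.
participant = {"switch_trolley": 1, "footbridge": 0, "crying_baby": 1}
print(round(utilitarian_proportion(participant), 2))  # 0.67 -> leans utilitarian
```

A higher proportion would be read as a utilitarian leaning and a lower proportion as a deontological leaning, although, as discussed below, such summary scores cannot reveal the reasoning behind each choice.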

Initial neuroscientific and cognitive research into utilitarian vs. deontological thinking deemed utilitarianism a mark of rational calculation and deontology a result of emotional reasoning. Kahneman (2011) contends that people use two cognitive systems: System 1 (consisting of immediate, intuitive responses) and System 2 (consisting of rational calculation and statistical reasoning). Greene et al. (2004) suggested that deontological reasoning resulted from System 1 processes and that utilitarianism was connected to System 2 processes. However, Gamez-Djokic and Molden (2016) undermined the emotional-rational divide imposed on the deontological-utilitarian difference. They found that utilitarian thinking could be made more emotional by increasing the potential reward in hypothetical moral dilemmas, while deontological reasoning was connected to participants weighing their decision against existing semantic decision rules. Instead, Gamez-Djokic and Molden (2016) suggested that utilitarianism and deontology were differentiated by contrasting motivational focuses: reward orientation (i.e., being more motivated by the prospect of potential reward) and risk aversion (being more motivated by the prospect of potential harm) respectively. People who follow a utilitarian framework are willing to exact harm in the hope of increasing wellbeing outcomes for more people. People who follow a deontological framework fear the emotional and physical risks attached to violations of moral conventions.

The dual-process model and its accompanying battery of moral dilemmas have been applied to many issues, such as medical triage during COVID-19 and terrorism incidents (Bloom et al., 2020; Tutić et al., 2022). The main advantages of the moral dilemma paradigm are that it is easy to administer and easy to adjust the impact of reward or risk in the scenario. Furthermore, the utilitarian vs. deontological divide appears to capture real biological and psychological differences, as shown in neurological and cognitive research (Greene et al., 2004). Next, I consider challenges to the dual-process model and the moral dilemma structure.

Ongoing Debates About the Dual-Process Model

The moral dilemma paradigm faces various challenges. Kahane et al. (2015) state that a sacrificial moral dilemma paradigm cannot capture the reasoning process a respondent underwent before choosing the assigned ‘utilitarian’ response (i.e., the one that causes harm to one party to avert a greater harm). First, moral dilemmas tend to be melodramatic, fantastical, and almost always sacrificial, which deviates from everyday moral situations. Second, researchers have found various drivers behind so-called utilitarian responding that are unrelated to any concern for the greater good: these drivers include psychopathy and a general tendency to choose action over inaction (Gawronski et al., 2017). However, the unknown nature of reasoning processes can be overcome by accumulating evidence from varied experimental designs. For example, Gamez-Djokic and Molden (2016) used qualitative short responses to understand the reasoning behind people’s moral decisions. Qualitative methods still carry the problems of self-report (e.g., social desirability bias, interviewer effects), as do closed-response surveys. However, the utilitarianism/deontology divide depends on rational conscious thinking as opposed to emotional intuitions, making self-report and self-reflection a valid way of understanding it.

Further blurring the differences between utilitarian and deontological thought is the extent to which non-framework-related situational and personal factors influence moral decisions. For example, in the trolley dilemma, people are more likely to kill one person to save the many in a version where they simply press a switch to change the tracks compared to the version where they must push the one person in front of the train to halt it before it kills others (Klenk, 2022). Other situational factors that affect moral decision-making include cognitive pressure (e.g., time limits) and incidental emotions. Personal factors that affect moral decisions include psychopathy (which is linked to choosing ‘utilitarian’ options) and gender (women being more inclined to deontological reasoning and men to utilitarian reasoning) (Friesdorf et al., 2015; Klenk, 2022). However, it is also arguable that some of these factors that appear to be unrelated to frameworks may actually influence people’s perceptions of norms and outcomes, the key concepts underlying deontology and utilitarianism respectively.

Dual-Process Usage in Academic Integrity Context

The dual-process model may explain some demographic differences commonly found in the academic integrity literature. For example, Miller et al. (2011) found that students who cited punishment (a utilitarian consideration) as a reason for not cheating were more likely to cheat than students who cited personal integrity (a deontological consideration). Women are less likely to cheat than men: women are also more likely to cite deontological reasons for choosing honest behaviour, while men are more likely to make utilitarian justifications for their actions (Friesdorf et al., 2015; Rundle et al., 2019). Fink et al. (2022) concluded that promoting academic integrity based on moral principles and ideas of autonomy (deontological concerns) rather than punishment or competition (utilitarian concerns) will likely produce more honesty among students. Appealing to deontological concerns may increase female students’ inclination to academic honesty compared to male students, although the relationship between reasoning style and gender requires further testing.

Next, according to Gamez-Djokic and Molden’s (2016) findings, students who maintain academic integrity may be weighing the opportunity to engage in misconduct against either semantic decision rules (if deontological) or a cost-benefit calculation (if utilitarian). Appealing to a deontological mindset would involve inducing fear and guilt about the prospect of engaging in academic misconduct. This strategy would not work on a utilitarian mindset, which focuses primarily on potential rewards, or in settings where engaging in malpractice may seem more worthwhile than honest work. Students who follow a utilitarian reason for avoiding misconduct may see completing the assignment themselves as more rewarding and effective than hiring a ghost writer. Appealing to a utilitarian mindset would therefore involve emphasising the inefficiency of misconduct compared to self-completed work.

It is also important to consider the implications of ongoing debates in the moral decision-making literature for our interpretations of academic behaviour. When designing campaigns or policies against academic misconduct, the effectiveness of appeals to utilitarian vs. deontological senses may vary according to situations and personal characteristics. For example, psychopathy will reduce the likelihood of responding to deontological messaging (Marshall et al., 2018). In addition, imposing time pressure makes people more likely to choose a deontological rather than a utilitarian option in sacrificial moral dilemmas (Klenk, 2022). In an academic context, time pressure differs between assessment types: for example, exams involve higher time pressure than take-home assignments. Comparing the reasoning behind, and likelihood of, misconduct in exams vs. take-home assignments could guide the way messaging about academic integrity in different assignment types targets various moral senses.

The academic integrity context can also help us understand the way deontology and utilitarianism function in the real world. While many moral dilemmas suffer from being melodramatic, the academic integrity setting generates moral dilemmas of varying levels of seriousness. Kiser et al. (2009) used the moral dilemma paradigm and Kohlberg’s framework to understand students’ beliefs about ethical technology usage. Many of these dilemmas could be used or adapted to an academic integrity context beyond the technological space. For example, one scenario was ‘should a student pretend to be a cancer patient in an online chat room in order to gather information for a paper he/she is writing for a class?’ (Kiser et al., 2009, p. 94). Situational aspects of the scenario can be manipulated by changing the imposter identity (e.g., another student) or the consequences of the assignment (e.g., whether it is used to inform important patient decisions vs. a participation requirement for educational purposes only). It is also possible to compose dilemmas that overcome the confound of utilitarian responding overlapping with the ‘take action’ option: for example, action (e.g., exposing a colleague’s fraudulence) could align with deontological reasoning while remaining passive (e.g., overlooking obvious cases of plagiarism) could align with utilitarian reasoning (e.g., saving trouble for staff members and a student from a financially struggling family). Table 3.1 shows different variations of an academic misconduct dilemma that manipulate different aspects of a dual-process approach.

Table 3.1 Variants of the Kiser et al. (2009, p. 94) academic integrity scenario that manipulate factors known to affect people’s tendencies to choose deontological vs. utilitarian decisions. The original is: ‘should a student pretend to be a cancer patient in an online chat room in order to gather information for a paper he/she is writing for a class?’
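As an illustration of how such variants could be generated systematically, the sketch below fully crosses two of the manipulated factors; the factor levels and wording are hypothetical examples built on the Kiser et al. (2009) base scenario, not items drawn from Table 3.1.

```python
# Illustrative sketch only: generating dilemma variants by crossing manipulated factors.
from itertools import product

# Hypothetical factor levels (not taken from Table 3.1).
imposter_identity = ["a cancer patient", "another student"]
assignment_stakes = [
    "a paper that will inform important patient decisions",
    "a paper completed only to meet a participation requirement",
]

# Fully cross the factor levels to produce a balanced set of scenario variants.
variants = [
    f"Should a student pretend to be {identity} in an online chat room "
    f"in order to gather information for {stakes}?"
    for identity, stakes in product(imposter_identity, assignment_stakes)
]

for scenario in variants:
    print(scenario)
```

Crossing factors in this way keeps the scenario wording constant apart from the manipulated elements, so differences in responses can be attributed to the manipulated identity, consequences, risk or reward.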

The dual-process model can add to our understanding of moral decision-making in academic contexts. However, academic integrity is a mix of emotional and cognitive factors. The dual-process model arguably confines moral reasoning to different forms of logical calculation at the expense of intuitive factors. In the next section, I explore a more emotionally inclined approach to moral behaviour: moral foundations theory.

Moral Foundations Theory

Haidt (2013) developed moral foundations theory as a framework for understanding intuitive responses to moral perception. In contrast to the rationally focused cognitive-developmental theories, Haidt (2013) likens morality to a set of intuitive taste receptors known as moral foundations. Moral foundations theory was developed from a thematic analysis of moral codes across different cultures. The themes are listed as five corresponding ‘virtue/vice’ pairs: care/harm, which refers to the amount of harm or benefit inflicted; fairness/cheating, which is concerned with people reaping undeserved benefits; authority/subversion, which respects beneficial relationships within hierarchy and social order; sanctity/degradation, which is about protecting the individual and others from contamination; and loyalty/betrayal, the extent to which people side with their in-group (Haidt, 2013).

Moral foundations theory was designed to provide an intuition-based alternative to the hyper-rational approach embedded in Kohlberg’s (1973) theory of moral development as well as the dual-process model that dominated prior moral psychology. Rather than suggesting that people’s moral decisions depend on rational calculations, moral foundations theory claims that moral responses are emotionally triggered first and then justified after they occur (Haidt, 2013). These modules were evolutionarily adapted to protect against anti-social tribe members and potentially disease-ridden stimuli (Haidt, 2013; Sznycer & Lukaszewski, 2019). The sensitivity of each module depends on socialisation and individual differences (Landmann & Hess, 2018).

The main contributions of moral foundations theory have been in political psychology research, specifically the differences between progressives and conservatives (‘left’ vs. ‘right’) and campaigns about controversial topics (Musschenga, 2013). Faced with an increasingly polarised political space in America, Haidt (2013) sought to help people across the political divide see one another as people with differently sensitised moral foundations rather than people fighting for good vs. evil. He observed that progressives valued care followed by fairness while conservatives valued all five foundations equally. For example, progressives are more likely to approve wealth redistribution policies for the sake of supporting vulnerable populations. In contrast, conservatives are more likely to disapprove due to their perception of redistribution (via taxation) as theft from taxpayers.

Moral Foundations Theory in Academic Integrity Context

Moral foundations theory’s emotionally focused themes can contribute to the approaches that universities take to reduce academic misconduct. First, moral foundations theory suggests that moral judgements are intuitive first and rationalised later. Indeed, students have been found to neutralise unethical behaviour by coming up with post-hoc rationalisations. For example, researchers have found that students justify misconduct by downplaying the consequences of cheating, devaluing the worth of the assignment or suggesting that cheating is a norm in their cohort (McCabe, 2016). Thus, academic integrity approaches such as existing criminological frameworks and the dual-process model may be incomplete in their focus on cost-benefit calculus. The emotional basis of moral judgements may also explain students’ susceptibility to misbehaviour due to moral disengagement, or the deactivation of guilt (Ashford, 2021; Curtis et al., 2022). Similar to Haidt’s (2013) account of moral justifications, guilt deactivation usually involves after-the-fact rationalisation through techniques such as euphemism, advantageous comparison, and distorted views of consequences. According to Newton and Lang (2016), essay mills have already targeted the emotional bases of cheating behaviour. For example, they use flawed advantageous comparisons (e.g., claiming that academia is inherently corrupt) and minimisation of behaviour (e.g., describing their services as ‘homework help’ and ‘exemplar answers’) to reduce the guilt attached to cheating.

The moral foundation of fairness/cheating may also explain the significant influence of norms on cheating: students who believe cheating is normal or unpunished may find it unfair to exert honest, hard effort while their peers are taking the ‘easy’, illicit way out (see Curtis, 2023). Alternatively, Marsden (2016) found that students who insisted on ‘A’ grades were more likely to cheat: this may be because the perceived consequences of failing to achieve an ‘A’ are inflated compared to the consequences of cheating. Understanding the moral foundations that students are sensitive to may improve administrators’ and educators’ capacity to communicate the severity of academic cheating. For example, while academic integrity primarily engages the fairness/cheating module, it also relates to the sanctity/degradation module for religious students. Studies by Onu et al. (2021) and Ridwan and Diantimala (2021) found that students with high levels of religious knowledge were less likely to cheat than those with lower levels of religious knowledge.

Ongoing Debates in Moral Foundations Theory Research

Despite the potential contribution of moral foundations theory, challenges to its underlying theoretical structure and basis in reality may undermine its capacity to unpack the moral emotions behind academic misconduct. There remain questions about the latent structure of moral foundations theory and the extent to which specific emotions connect to particular foundations. One reason for this is that moral appeals to harm and purity are extremely difficult to activate separately, especially in real-life moral dilemmas. Landmann and Hess (2018) found that violations of any foundation triggered anger and contempt, but only purity was linked specifically to disgust. The lack of specificity in triggered emotions led Landmann and Hess (2018) to propose a collapsed moral foundations structure that identified three modules instead of five: suffering, intentional norm violation, and purity violation. However, there are other proposals to expand moral foundations theory to six foundations, adding a liberty/oppression pairing that refers to feelings of resentment towards oppressive authority (Haidt, 2013).

The instability of moral foundations theory’s core structure undermines its attempt to ground itself in reality through evolutionary explanations. Haste (2013) states that many of these explanations are post hoc and are inappropriate for investigating sociological, artificial constructs such as political differences (or academic integrity). She suggests that the idea of evolutionary wiring weakens given that morally triggering stimuli can change according to context (e.g., changing attitudes towards first-cousin marriages, becoming accustomed to the excreta of dependants, etc.). Given the instability of moral foundations theory’s proposed modules, Gray and Keeney (2015) suggest that moral foundations theory may simply be a repackaging of progressive vs. conservative differences, which makes it too specific to an American context to qualify as a universal psychological framework.

Conclusion

Academic integrity is important for ensuring that students leave higher education with the appropriate skills and ethical character for professional success. This chapter highlighted the need to focus on the moral aspects of academic integrity, which involve creating a sense of cultural integrity and a personal duty to uphold ethical behaviour. Improvements to the moral dimension of academic life will have the most profound impacts not only on the learning environment but on professional life. However, research on the moral aspects of academic integrity has not yet capitalised on frameworks from moral psychology. Therefore, I explored the way the dual-process model and moral foundations theory can enhance our understanding of moral reasoning in an academic context.

Connecting academic behaviour with the dual-process model’s utilitarian vs. deontological differences in regulatory focus can help design effective messaging against academic misconduct. The moral dilemma paradigm can also provide researchers with a systematic way of adjusting the variables that may influence students’ perceptions of specific misconduct scenarios. Moral foundations theory’s emphasis on the emotional aspects of moral behaviour suggests that the focus on rational calculation inherent in the dual-process and other cognitive-developmental theories is incomplete. Moral foundations theory implies the importance of adjusting the emotional register around discussions of academic misconduct and honesty.

A potential drawback of both the dual-process model and moral foundations theory is that both are open to ambiguous interpretation. For example, a so-called utilitarian option in a moral dilemma can be reached via deontological reasoning, and the harm vs. purity facets in moral foundations theory are difficult to separate in real-life scenarios. However, the dual-process model arguably excels in providing an explanation of, and an experimental paradigm for, moral behaviour compared to moral foundations theory, which relies on descriptive trait measures and has a more tenuous basis in reality.

Nevertheless, both may provide valuable insights into the reasoning behind students’ choices to follow or break academic rules. While current academic integrity literature has investigated cognitive and emotional aspects of cheating, these are often explored in isolation even though the ethical aspect of behaviour is both cognitive and emotional. Moral frameworks, particularly a combination of the rationalist dual-process model and affective moral foundations theory, provide an integrated method for understanding the interaction between cognitive and emotional factors that affect the decision to engage in misconduct or not.