Introduction and background

Learning analytics is commonly defined as the “measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (Long and Siemens 2011). Ferguson (2012) notes, however, that this definition covers most educational research, and identifies two additional assumptions: that learning analytics make use of pre-existing, machine-readable data, and that its techniques can handle ‘big data’, sets of data too large to deal with manually. Ochoa et al. (2014, p. 5) further observe that “learning analytics is a new, expanding field that grows at the confluence of learning technologies, educational research, and data science”, and they suggest that learning analytics has the potential to answer two simple but challenging questions:

  • How do we measure the important characteristics of the learning process?

  • How do we use those measurements to improve it?

Overall, learning analytics is receiving considerable attention in the Australian higher education sector (Lockyer et al. 2013), driven by multiple pressures towards engagement with, and application of, analytics in higher education and in society more generally (Larusson and White 2014; Clow 2013; Buckingham Shum and Ferguson 2012). These drivers include the ‘technological revolution’, both in the sense that technology use has become the ‘cultural norm’ and in terms of advances in technological capability itself (Kirkwood and Price 2014). Additionally, an increased focus on accountability in higher education (Universities Australia 2013) has led to political interest in harnessing analytics capability in the sector (Ali et al. 2013; Ferguson 2012; Mattingly et al. 2012; Huijser et al. 2016). Learning analytics offers much potential to understand how services to students, and student learning, can be improved (Knight et al. 2014); however, there are also challenges, some of which relate to ethical elements.

Grappling with ethics in relation to learning analytics has only just begun. Some researchers have wrestled with legal concepts such as privacy and informed consent (Cormack 2016; Prinsloo and Slade 2015; Drachsler et al. 2015; Pardo and Siemens 2014; Heath 2014), while others are beginning to look at ethical decision making (Gibson 2016; Swenson 2014; Slade and Prinsloo 2013; Willis et al. 2013). Concerns also run deeper, drawing attention to the value base that sits behind the tools and techniques, which are purported to be value free (Perrotta and Williamson 2016). This work has led to an acknowledgement of both the need for ethical practice and the complexity of the field. Noting this complexity, some researchers are advocating a set of principles to guide the process (Prinsloo and Slade 2013).

There are two key approaches to articulating these principles: one is value-based and the other, while acknowledging the role of values, steers clear of articulating them. An example of the first approach positions learning analytics within a broader student learning engagement framework and is based upon the foundation value of social justice (Nelson and Creagh 2013). The second approach generally identifies a set of principles often aligned with those of research practice (e.g. Drachsler and Greller 2016). One example of the second approach is demonstrated by Prinsloo and Slade (2015, pp. 12–14), who identify the following six principles to inform a “guiding framework for considering learning analytics as moral practice” (p. 12):

  1. Learning analytics as moral practice

  2. Students as agents

  3. Student identity and performance are temporal dynamic constructs

  4. Student success is a complex and multidimensional phenomenon

  5. Transparency

  6. Higher education cannot afford to not use data

Prinsloo and Slade (2016) have further developed this thinking by connecting it to the key concept of student vulnerability, within which inherent and situational vulnerability are distinguished as dispositional and occurrent states. Each of these presents different challenges in terms of the use and application of learning analytics.

Thus, the second approach acknowledges the core role of the broader context in the development of learning analytics. As Prinsloo and Slade (2013, p. 12) argue:

an institution’s use of learning analytics is going to be based on its understanding of the scope, role and boundaries of learning analytics and a set of moral beliefs founded on the respective regulatory and legal, cultural, geopolitical and socio-economic contexts. Any set of guidelines, concerned with the ethical dilemmas and challenges in learning analytics, will necessarily also be based on a set of epistemological assumptions. As such, it would be almost impossible to develop a set of universally valid guidelines, which could be equally applicable within any context.

The critical role of context is reinforced across a range of learning analytics research, both in relation to ethics and more broadly (Drachsler and Greller 2016; Sclater 2016; West et al. 2015a; Colvin et al. 2015). The importance of context and the concept of ‘moral practice’ raise some fundamental challenges, as ‘moral’ can mean different things to different people and is contextually bound.

Findings from an Australian national research project on the use of learning analytics for student retention reinforce many of the messages in the literature and provide essential background to this paper (see: http://www.letstalklearninganalytics.edu.au/). The study was commissioned by the Australian Office for Learning and Teaching under a strategic priority grant focusing on learning analytics, with a particular focus on its potential to aid student retention. The study used a mixed-methods approach, the primary data collection methods of which were two surveys targeted at different groups (one academic-level survey and one institution-level survey, the latter targeting people in strategic decision-making positions), as well as a series of follow-up semi-structured interviews with self-selecting participants from the surveys (23 participants from 15 different institutions).

The headline findings of this study, reported elsewhere (West et al. 2015a, b), indicate that:

  1. The sector in Australia is at an early stage of development, implementation and understanding around learning analytics.

  2. Context is critical and underpins the development, implementation and use of learning analytics for retention.

  3. Tensions exist around the extent to which learning analytics can drive actions and behaviours or take over the functions of people.

  4. Tensions exist between ‘business’ needs, wants and limitations (e.g. costs) and ‘educational’ needs and wants (e.g. academic freedom, and innovation in learning and teaching).

  5. People across institutions have a key role to play in leveraging the opportunities of learning analytics, which must take account of the relationships between strategy, planning, policy and action.

  6. Establishing relevant business and educational questions is critical.

Many institutions are looking to policy to guide decision-making practice (e.g. Welsh and McKinney 2015; The Open University 2014). However, as noted by Prinsloo and Slade (2013, p. 244) in their review of policies of two institutions, the documents “lack(ed) explicit guidance for the questions, issues and ethical challenges to institutionalise learning analytics.” Additionally, many institutions rely solely on broader ICT and student learning policies that pre-date the emergence of learning analytics (West et al. 2015b).

Thus, we argue in this paper that ethical principles should underpin institutional decision making in relation to learning analytics. This will require a multi-layered approach in which values, expectations and actions are aligned and made explicit, and in which ethical literacy is acknowledged and developed (Swenson 2014). Ethical literacy goes beyond legal frameworks alone and is not simply about compliance; rather, it refers to the ability to make informed and ethical decisions based on interpretation and analysis of the ethical implications of learning analytics-related practice at every stage of implementation and application. This is important, as the legal and regulatory context is often slower to respond than advances in analytics-related technology demand (Gibson 2016).

Ethical theories

Many issues and/or questions that arise in the context of learning analytics have no simple answers. They can be guided by a set of ethical principles such as those presented by Drachsler and Greller (2016), Prinsloo and Slade (2015), or Nelson and Creagh (2013). However, as Sclater (2016) notes, not only do most of the questions that arise have both an ethical and a legal dimension, but many are also logistical in nature. Moreover, even the logistical questions are often a matter of judgment and raise what could be considered ethical dilemmas, and are therefore likely to shape future directions in ways that relate to the values that underpin an institution. For example, even an apparently straightforward question, such as ‘should the student dashboard be activated so students can access analytics about their own academic performance?’, does not necessarily have a simple answer. The ‘answer’ depends on a number of factors, including the principle(s) and values that the institution holds, the demographics of the student body (e.g. is there a higher percentage of non-traditional students who may not be confident in their abilities?), the policy environment, and the available resources to support students in interpreting analytics data and responding appropriately.

This paper seeks to address ethics beyond the fundamental constraints of the political economy context, which is taken as a given and includes basic contextual legal, political, and economic factors. Engaging in an ethical decision making process prompts consideration and acknowledgement of our values, which every individual holds, and of context, in order to identify the ethical principle(s) being applied to the decision. As such, it is essential that such principles are understood and de-constructed to acknowledge the position and value base from which they arise.

Broadly, ethical frameworks can be divided into deontological, virtue, and consequentialist ethics. Deontological ethics are based on the idea of duty and connected to rights; the work of Kant, Rawls and Ross is deontological (Cassuto Rothman 2005). In the Kantian view there is the “idea that there are moral absolutes in the world…things which are quite simply wrong, morally, just as there are things which are wrong logically or arithmetically” (Reamer 1993, p. 30). Kant’s ideas are derived from the concept of duty, and he argues that humans are ends rather than means. Applied to learning analytics, this implies that people have the right to know that their data are being collected and the right to access their own data. Of course, this still raises questions around where other people’s data fit.

Virtue ethics date back to Aristotle. As Reamer (1993, p. 41) explains, “an action is the right thing to do, not because of some sort of calculation as to its consequences, or because it concurs with a ‘duty’, but because it is consistent with virtue.” In this context, virtue is aligned with qualities such as courage, honesty and helpfulness, and is consistent with the idea of living life as fully as possible. Reamer (1993, p. 42) goes on to explain that “virtue ethics differs from deontological and consequentialist ethics in that it focuses not on actions, but on agents—that is on the persons carrying out the actions.” Again, if this is applied to learning analytics, then in line with honesty (a virtue) it should be disclosed to students, transparently and accessibly, that their data is being collected. In a different example, if collection of a wide range of data is a ‘good’ thing because it is being collected for the ‘right reasons’ (i.e. to be helpful), then that is acceptable. However, what if the data being collected relate to academic performance? What might be appropriate for a student might not be so for an academic. A virtue-based reading of analytics could in this case lead to a debate around academic freedom versus student learning. The ethics around such examples are far from clear (Slade and Prinsloo 2013).

Utilitarianism is a consequentialist approach. Within this frame, the benefits to the various players are considered, and action is related to the benefit of the largest number of people (Beauchamp and Childress 1994). In this view, if the majority of students benefit from presenting a risk framework to assist them in their progression, then that is what should be done. However, questions can be raised around whether the short-term gains and losses are the same as the long-term ones. For example, showing students how they are doing in one class right now, in relation to other students, might help them at this moment in time, but what are the potential cumulative consequences of this, and where are these addressed? The impact could be negative if students see that they are doing poorly in relation to their peers across all of their subjects, as this may have a detrimental effect on their confidence or sense of belonging, both of which are related to student success and retention (Nelson et al. 2014; Willcoxson et al. 2011). Alternatively, not showing students how they are doing in relation to other students may constitute a missed learning opportunity. Thus, there may be cost/benefit differences between the institution/academics and students, or differences in the costs/benefits to different cohorts of students.
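To make this aggregation problem concrete, the short sketch below tallies hypothetical short- and long-term utilities for three illustrative student cohorts. All cohort shares and utility values are invented for illustration and are not drawn from the study; the point is only that a positive aggregate benefit for the majority can conceal a concentrated long-term harm to a smaller cohort.

```python
# Toy consequentialist tally for activating a comparative performance dashboard.
# Cohort shares and utility values are hypothetical, for illustration only.
cohorts = {
    # name: (share_of_students, short_term_utility, long_term_utility)
    "confident students":      (0.60, +2.0, +1.0),
    "mid-range students":      (0.25, +1.0,  0.0),
    "low-confidence students": (0.15, -1.0, -4.0),  # confidence/belonging erode
}

short_term = sum(share * st for share, st, _ in cohorts.values())
long_term = sum(share * lt for share, _, lt in cohorts.values())

print(f"short-term aggregate: {short_term:+.2f}")  # +1.30: majority benefits now
print(f"long-term aggregate:  {long_term:+.2f}")   # +0.00: gains cancelled by harm
```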

In these utilitarian frameworks, some people’s costs are usually not weighted as highly as others’. Swenson (2014, p. 249) illustrates this in discussing the application of Bernadette Longo’s work, stating that consideration should be given to “…who has the power to:

  • Make decisions about the learning analytics model and data,

  • Legitimize some student knowledge or data and not others,

  • Focus on potential intervention strategies and not others,

  • Give voice to certain students and not others, and

  • Validate some student stories and not others.”

These points provide a useful initial set of factors to consider in developing a framework around learning analytics-related ethical practice and ethical literacy, including ethical principles.

Ethical principles

From these broad theories a set of principles can be developed to underpin decision-making processes and provide guidance in the application of ethics. The key principles, as outlined by Beauchamp and Childress (1994, p. 395), and used in healthcare settings, are also relevant to the discussion of learning analytics:

  1. Respect for autonomy generally translates to the idea of self-determination and the right of people to make their own decisions.

  2. Non-maleficence essentially means that we should do no harm.

  3. Beneficence means that in addition to doing no harm, we should also pursue good outcomes for others.

  4. Justice translates into the concept of fairness and is often related to the distribution of resources based on equity, need, effort, merit and the market.

In addition, Swenson (2014) challenges us to consider the values that underpin applications of learning analytics in different institutional contexts. This becomes particularly important as we enter the ethically contested space of learning analytics. Some in the field maintain that learning analytics itself is value neutral, and that it is in the ways in which institutions apply it that ethical issues and questions really come to the fore (Drachsler et al. 2015). Yet others argue from a sociological perspective that the “methods used for the classification of and measurement of online education are partially involved in the creation of the realities they claim to measure. Hence, these methods are partaking in an active generation of social realities” (Perrotta and Williamson 2016, p. 2). Drawing on the work of others (e.g. Ozga 2016; Bowker and Star 1999; Uprichard et al. 2008), Perrotta and Williamson (2016, p. 9) discuss how key areas of governance, policy making, power, authority and commercial interest are ‘entangled’ in relation to learning analytics, as proponents believe “in the neutral and ‘pure’ nature of tools and methods of data analysis…”

Additionally, learning analytics, which is explicitly focused on learning (Gašević et al. 2015; Ochoa et al. 2014; Clow 2013), is only one of a variety of types of data analytics, which sometimes overlap. This further complicates ethical questions and dilemmas, as the same data can be used by different organisational units for different purposes (Kambatla et al. 2014; West 2012). Therefore, the ethics of learning analytics must be driven by an institutional value base, as well as ethical principles, since there are often no inherently right or wrong answers to the questions posed by applying learning analytics in practice. The value base of the institution should be the ethical and practical compass. If that value base, and thereby the institution’s ethical approach, is made explicit, then informed choices can be made. The specific context of individual institutions is therefore a crucial element in this respect.

The importance of context

A university’s value base is developed over time and based on various factors including historical foundation, geographic location, student cohort and leadership. Institutional values evolve into an institutional culture that should be reflected in its policies and processes, its focus, and ultimately the decisions that are made. Clarity around this value base is important at every point in the learning analytics journey but particularly critical in the early stages of its implementation (Pardo and Siemens 2014). Implementing learning analytics without such an explicit focus may lead to a range of undesirable scenarios, including: decisions that are not explicit in their ethical underpinning; decisions at odds with the intentions of the institutional value base; systems that do not achieve their aims; conflicting decisions due to conflicting ideologies; interventions that are at odds with the data; and a full range of potentially unintended consequences.

The implementation phase is critical because many decisions are made that will be difficult or expensive to change later. Many of these decisions appear deceptively simple but they will have substantial flow-on effects. People may draw conclusions based on their own value base if the institutional values are not clear, or consistent in their intention. Clearly, the complexity of potential ethical scenarios illustrates that defining and achieving ethical practice with learning analytics will be challenging. This paper explores levels of understanding and concern in relation to ethics and learning analytics, with data from an Australian national study entitled Learning analytics: assisting universities with student retention, the methodology of which is outlined next.

Methodology

The study employed a mixed-method design featuring four distinct data collection and analysis processes:

  1. Institution-level survey, to gather data on infrastructure, policies, strategy, governance and concerns related to learning analytics from an institutional point of view. The survey included 43 questions gathering a mix of quantitative and qualitative data. Questions were developed drawing on technological maturity models and frameworks (ACODE 2014; Arnold et al. 2014; Chatti et al. 2012; Norris and Baer 2012, 2013; Marshall and Mitchell 2006) and themes emerging from the literature. The survey was piloted in the project’s partner institutions, and questions were further refined based on feedback received through this process as well as feedback from the project reference group. The electronic survey was emailed to Deputy Vice Chancellors (Academic) (DVCAs) in all registered Australian universities (N = 40) during July and August 2014. Twenty-two institutions completed the survey, representing a response rate of 55 % of all Australian higher education institutions.

  2. Academic-level survey, to gather data from individuals working in academic positions in Australian universities about their understanding, use of, and concerns about learning analytics. It consisted of 34 questions eliciting both quantitative and qualitative data. Due to the exploratory nature of the project, questions were developed by drawing on key themes emerging from the literature. The survey was piloted with 20 academics in partner institutions as well as project reference group members, and was further refined based on the feedback. A purposive snowball method was employed, with distribution through professional bodies (e.g. the Australasian Council on Open, Distance and eLearning, and the Council of Australian Directors of Academic Development) and relevant institutional contacts. A total of 353 valid, anonymous surveys were completed between 2 September and 13 November 2014.

  3. Interviews. All participants in the academic survey were invited to participate in the interview stage of the project. Twenty-three participants covering a range of positions and functions in universities (e.g. teachers, student support staff, and academic developers) were interviewed. The 15–30 min interviews were conducted via telephone and consisted of five open-ended questions exploring the themes that emerged from the quantitative data.

  4. Case studies. Each of the five project partner institutions prepared a reflective institutional case study based on a set of structured questions related to context, infrastructure, governance and application of learning analytics.

Ethics approval for the overall study was obtained from Charles Darwin University’s Research Ethics Committee. Potential participants were provided with information sheets outlining how their data would be used and all participants provided their informed consent.

While the two surveys covered a wide range of factors related to learning analytics, this article draws specifically on two questions (one open and one closed) from the institutional survey, and four questions from the academic survey (three open and one closed), as well as demographic data from the surveys:

Institutional survey:

  1. Does the institution inform relevant parties (e.g. lecturers and students) about how learning analytics is used and how it might affect them?

  2. What ethical issues have been raised in the institution’s work around learning analytics?

Academic survey:

  3. Do you have any concerns about ethical issues related to learning analytics?

  4. What is your level of concern about a list of learning analytics issues?

  5. What, if any, ethical principles do you think should guide the use of learning analytics?

  6. Are there any ethical concerns you have that have not yet been covered or that you would like to expand on?

Survey data was exported to the Statistical Package for the Social Sciences (SPSS) for quantitative analysis. Qualitative data was analysed in two ways by two independent coders:

  1. Responses coded according to the ethical theories and principles identified in the literature review and itemised by the project team in advance of coding (Q2, 3, 5).

  2. Open coding to explore emerging themes (Q6 and interview data).

Additional strategies were adopted to promote data trustworthiness and authenticity, including the use of additional coders to achieve consensus on open coding where agreement was not initially evident between the first two coders, triangulation of data, and reflective commentary (Northcote 2012; Shenton 2004). This also included taking tentative findings back to the sector as part of a national forum with 148 participants.
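The study reports consensus-based coding rather than a formal agreement statistic. Purely as an illustration of how agreement between two coders might be screened before a consensus discussion, the sketch below computes Cohen’s kappa over hypothetical labels aligned to the four principles discussed earlier (A = autonomy, N = non-maleficence, B = beneficence, J = justice); the labels are invented, not the study’s data.

```python
from collections import Counter

def cohen_kappa(coder_a: list, coder_b: list) -> float:
    """Chance-corrected agreement between two coders' category labels."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    marg_a, marg_b = Counter(coder_a), Counter(coder_b)
    # Agreement expected by chance, given each coder's own label frequencies.
    expected = sum(marg_a[c] * marg_b.get(c, 0) for c in marg_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical labels for eight survey responses; not the study's actual data.
coder_1 = ["A", "A", "J", "B", "N", "A", "J", "B"]
coder_2 = ["A", "A", "J", "B", "B", "A", "N", "B"]
print(f"kappa = {cohen_kappa(coder_1, coder_2):.2f}")  # ~0.65: discuss disagreements
```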

The broader study was exploratory in nature and as such incorporated both quantitative and qualitative methods from an interpretive position. It is beyond the scope of this paper to present all findings that provide the broader context. However, the results of the broader study and further information relating to the methodology are available through the project website (http://www.letstalklearninganalytics.edu.au/project-resources-and-outputs/) as well as a range of publications (West et al. 2015a, b; Huijser et al. 2016). What follows is thus a selection of relevant findings.

Findings

A total of 22 Australian universities responded to the institutional survey, representing a good cross section of institutions in relation to size, location, focus and student cohorts. There were two relevant questions in this survey, one closed-ended question, which asked about informed consent in relation to learning analytics data collection and use, and the other an open-ended question, which asked about ethical issues during the implementation of learning analytics.

In terms of the former, only 4 out of 22 institutions indicated that all relevant parties in their institutional context were informed about the use and potential impact of learning analytics. Eleven institutions indicated they informed some relevant parties, while the remaining seven either were not sure (4) or indicated that no stakeholders were informed (3).

The ethical issues raised focussed predominantly on privacy and the ethical use of data. Out of 22 institutions, 14 identified some issues, 9 of which related explicitly to privacy or appropriate data use. As one interviewee noted:

“We have a part that you can go to in the planning and statistics portal. It allows us to access planning statistics and all sorts of different things. Now, I guarantee you that most people in our faculty don’t even know about it… However, one conversation we often have involves a real lack of clarity around clearance to access certain data.”

The remainder of the institutions indicated that they had not yet thought about specific learning analytics issues, including ethical issues.

Overall there were 353 completed academic surveys, with 276 (78 %) of respondents indicating that they did some teaching. The remaining participants worked in academic support or on projects related to learning analytics. Over 80 % of respondents were employed on a full-time basis and 70 % had worked in the sector for five or more years. Table 1 provides the key demographics from the academic survey.

Table 1 Frequency distribution of selected demographic data

Of the four ethics-related questions in the academic survey, participants were first asked: Do you have any concerns about ethical issues related to learning analytics? Three hundred and one (301) of the 353 participants who completed the survey responded to this question, with 27.8 % (98) indicating that they did, 34 % (120) stating ‘no’, and 23.5 % (83) being unsure (percentages calculated against the full sample of 353). The 181 (98 + 83) participants who indicated either ‘yes’ or ‘unsure’ were asked about their level of concern around specific ethical issues. Figure 1 shows their levels of concern around each item for those who answered the question (n varies slightly between items, from 177 to 179, due to missing data).
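As a quick arithmetic check on the figures above (counts as reported; the only question being tested is which denominator the percentages use), the reported percentages correspond to the full sample of 353 rather than to the 301 respondents who answered this question:

```python
# Counts as reported in the survey findings above.
yes, no, unsure = 98, 120, 83
answered = yes + no + unsure          # 301 responded to this question
full_sample = 353                     # completed the academic survey overall

for label, count in [("yes", yes), ("no", no), ("unsure", unsure)]:
    print(f"{label}: {count / full_sample:.1%} of 353 | {count / answered:.1%} of 301")
# yes: 27.8% of 353 | 32.6% of 301  ->  the reported figures use the full sample
```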

Fig. 1 Level of concern about selected ethical issues (responses not mutually exclusive)

Two open-ended questions were asked:

  1. What, if any, ethical principles do you think should guide the use of learning analytics?

  2. Are there any ethical concerns you have that have not yet been covered or that you would like to expand on?

Responses covered a wide range of issues and suggested principles. Although often expressed in different terms (e.g. prioritising student interests, obtaining consent, being student-centred, voluntary participation), there was a clear connection to the key ethical principles of autonomy, non-maleficence, beneficence, and justice (Cassuto Rothman 2005). Table 2 shows how the various participant responses aligned with the overarching principles, which are marked in bold. The following quotes from the interviews are in a similar vein:

“…one of the key questions would be around whether it was going to be used in a constructive way or in a punitive way”.

“There is a perception that privacy exists. If people just gave up that concept they would be fine”.

“The awareness of students as to how much information is actually collected on them is something I don’t think they do know”.

Fifteen participants indicated that learning analytics should follow the ‘normal’ rules and key principles of research.

Table 2 Participant views on ethical principles that should guide the use of learning analytics (n = 112)

Discussion

The results show that participants’ views largely connect with established ethical principles. However, the main emphasis seems to be on elements related to autonomy—privacy and confidentiality, informed consent, and the ability of people to use their own data (Pardo and Siemens 2014). The literature has a predominant focus on similar issues (e.g. Cormack 2016; Drachsler and Greller 2016; Prinsloo and Slade 2015; Drachsler et al. 2015). There appeared to be a limited understanding of the potentially complex ethical dilemmas that learning analytics presents.

A considerable number of respondents indicated that ethical considerations should “just be the same as research ethics”, disregarding several complexities in a field situated in a digital context. While learning analytics may be used in a research context (Larusson and White 2014), this is not usually the primary purpose of implementation from an institutional point of view. Rather, learning analytics can be related to a broad range of purposes including the improvement of learning and teaching, business outcomes and the improvement of the student experience (Gašević et al. 2015). This leads to a wider variety of ethical dilemmas, for which established research ethics may not be equipped.

There is an uneasy relationship between ‘consent’ and ‘informed consent’ in contemporary digital spaces, and in the extent to which a duty of care exists to ensure consent is genuinely informed (Prinsloo and Slade 2015). In the interviews, participants talked about how privacy can be dealt with. Opting out is not necessarily possible within current technical infrastructure, and as such the option is either to consent or not to enrol (although much progress is being made in this space, e.g. van Trigt 2016). Thus, the concepts of ‘consent’ and ‘informed consent’ are easily conflated. Consent may mean signing permission for students’ data to be collected and used within the context of their involvement in education at an institution. This may include learning analytics data, yet that may not be made explicit (Welsh and McKinney 2015). Alternatively, such consent may be implied in IT policies, underpinned by an expectation that (a) students have read the policy, and (b) by virtue of enrolling in an institution they agree to abide by it.

Either option may not align with the concept of informed consent, as both are at odds with deontological ethical frameworks. Rather, explicit statements about the types of data collected would need to be included in enrolment information. In contemporary digital spaces this kind of consent is complex and the ‘informed’ element often problematic, especially in social media spaces (Custers et al. 2013).

Another complication is that different organisational units are often interested in learning analytics for different, and potentially conflicting, reasons (Long and Siemens 2011). Rationalisation of the institution, performance management, and student experience are just a few of the angles that came through in the institutional survey. In each case, the ethical framework and the underlying ethical principles around learning analytics data collection potentially change. The picture becomes even more complex when organisational power relations enter this mix. In other words, senior management may have very different reasons for wanting to collect and access various types of learning analytics data than lecturers, and as a result their ethical considerations would have a different impact on the implementation of learning analytics (Willcoxson et al. 2011). Data from this study indicate that academics were very concerned about transparency in the use of learning analytics, as well as how it may be used for the performance management of staff.

The data from this project suggest that the conversation between these divergent interests, with a focus on ethical principles involved, is yet to take place in most institutions, even though learning analytics is being implemented. So what should such a conversation specifically be about in a practical sense? As stated by Willis et al. (2013, p. 6), “simply put, practical ethics operate on one part of the equation and theoretical ethics operate on the other; the intersection of the two is where we can take action and have the greatest effect.” In this context, it is necessary to consider what this intersection might look like, and how it could be used as a starting point for institutional conversations about ethical approaches to learning analytics.

People who carry out the decision making process are not necessarily aware of ethical principles in the implementation and use of learning analytics. One would hope they would be aware of, and see a connection to, the code of ethics and/or various policies, but again this is not a given (Swenson 2014). The data from this study suggest that at both an institutional and academic level, limited thought is given to the broader range of issues. Only 64 % of institutions identified any issues, and these were within a limited scope, while only 27.8 % of respondents in the academic survey indicated that they had any ethical concerns related to learning analytics; 23.5 % said they were unsure and 34 % said they had no concerns at all. Thus, respondents were either unconcerned or unaware of ethical implications, suggesting a need for guidance. Therefore, a decision making framework is presented here, intended both to facilitate ethical decision making and, when applied to different decisions and ethical issues, to help institutions develop well-aligned policies and cumulative ethical literacy.

Framework for ethical decision making

This section describes a simple ethical decision making framework that builds on relevant literature to address the critical need identified in the study. Fig. 2 presents a four-step framework that views ethical decision making as an operational process. The aim of this framework is to concisely model how a complex issue can be mapped, refined, decided on, and documented within a fairly linear process suited to the busy operating environments of most institutions. There may be circumstances where reflection or new information means retracing earlier steps, and the framework does not preclude doing so.

Fig. 2 Ethical decision making process for learning analytics

The steps in this process are demonstrated with an application to an example issue from the case studies in this project.

Applying the framework to an example

The application of this tool will lead to different pathways, but within a clear ethical framework. Here, we apply the tool to an example that arose as part of the broader learning analytics project. The use of social media for learning and teaching has benefits in terms of social learning, responsiveness to students, and convenience. A lecturer may expect students to use a social media platform to extend their understanding of learning and teaching, with the platform integrated with other institutional data to provide learning analytics. The resulting ethical tensions could relate to equity, the boundary between the personal and professional realms, and the needs of an institution versus students’ right to choose. To simplify the discussion, this decision is considered as one in which an institution (or rather a lecturer) is deciding whether to require students to use a social media platform that is likely to require them to agree to a separate set of terms of use and provide data to a third party.

Step 1 Explore the issue: Map the potential costs/risks/harms and the potential benefits, independent of the values of the institutional context. This step should include consideration of the key principles.

Table 3 displays a summary of potential harms, costs and benefits in relation to the nominated example. This step helps to uncover a variety of ethical issues, though without an understanding of the institutional context it is hard to ascribe particular levels of importance to each.

Table 3 Exploration of issues around the use of social media for learning analytics

Step 2 Apply an institutional lens to the issue: Make connections between the identified issues and the institutional context. This allows the benefits and risks to be weighted in terms of the practical realities of the institutional context.

The degree to which an ethical issue is problematic varies across contexts. In our example this is best highlighted by comparing two hypothetical institutional contexts. Institution A may tell students at the point of enrolment that they may be required to use social media during their studies. It might also make clear that data is collected purposefully to improve learning and teaching in all areas of engagement. Further, it might have policies in place that outline the use of social media for learning and teaching for both staff and students. By contrast, Institution B might have no statement at the time of enrolment about learning analytics data collection and use, or about the use of social media for learning and teaching, nor any relevant policies.

Thus, applying an institutional lens helps to define potential harms and benefits in a given setting. In this simple example we argue that in Institution A, where information and policies exist and have been provided to students, the potential breach of autonomy, if the decision is made to expect students to use social media, is comparatively smaller than in Institution B, as students have been given up-front and transparent information prior to enrolment.

Step 3 View the alternative actions (choices) in light of the ethical theoretical approaches. Thought should also be given to improving the quality of the choices in light of the ethical theories (e.g. should something be adjusted or added to make the action ‘more ethical’?).

From a virtue point of view, honesty and transparency may be seen as important values to uphold. Thus, even though the learning benefits to students might be greater if they are required to use social media, this might not be considered a good enough reason to do so, because it contravenes the value of honesty.

By contrast, higher value might be placed on the positive learning outcomes for students. This is made possible through the use of a platform (i.e. social media), which is best placed to take advantage of learning analytics. Taking this view reframes the decision because in this instance it is more desirable that students are given optimal learning opportunities. However, it might also be decided that documentation for enrolling students needs to be changed so that they are aware, and so the value of honesty is adhered to in the future as well. In this way an institution can increasingly become ‘more ethical’.

Step 4 Document the decision made with reference to the ethical decision making process and the principles, values and approach that underpin it.

If, on balance, the decision to implement the use of social media is made, it can be clearly stated that the various issues have been considered, policies and processes have been reflected upon, and that the decision was made because the learning benefit to students is most highly valued. The changes required also need to be documented to provide greater alignment with the virtue of honesty and the principle of autonomy. Through this ongoing process of placing ethical decision making on the institutional agenda, aligning policy and practice, and documenting work carried out so that it can be reviewed and refined, it is expected that over time institutions and staff will become more cognisant of the important role of ethical decision making and more adept at applying it.
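As one illustration of what Step 4 documentation could look like in practice, the sketch below records the social media decision as a structured log entry that keeps all four steps auditable. The field names and wording are our own assumptions, not part of the framework itself.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a Step 4 decision log; field names are assumptions.
@dataclass
class EthicalDecisionRecord:
    issue: str                                    # Step 1: the question decided
    harms: list = field(default_factory=list)     # Step 1: mapped costs/risks
    benefits: list = field(default_factory=list)  # Step 1: mapped benefits
    institutional_context: str = ""               # Step 2: policies, values, cohort
    ethical_lens: str = ""                        # Step 3: theory/principles applied
    decision: str = ""                            # Step 4: outcome and rationale
    follow_up: list = field(default_factory=list) # Step 4: alignment actions

record = EthicalDecisionRecord(
    issue="Require students to use a third-party social media platform?",
    harms=["data shared with a third party", "personal/professional boundary blurs"],
    benefits=["social learning", "richer learning analytics"],
    institutional_context="Enrolment information already discloses social media use",
    ethical_lens="Virtue (honesty, transparency) weighed against learning benefit",
    decision="Proceed: learning benefit to students is most highly valued",
    follow_up=["update enrolment documentation to state what data is collected"],
)
print(record.decision)
```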

It is clear that ethical decision making in relation to learning analytics is complex, particularly in terms of how it can be enacted in practice. Further issues will be unearthed as the field progresses, requiring additional principles to be identified and incorporated. However, of central importance is the ability to make ethical decisions about complex questions in an increasingly competitive and dynamic sector (International Council on Distance Education 2015) and within an increasingly technologically enabled and integrated environment. Many of those entrusted with making such decisions in relation to learning analytics are not necessarily knowledgeable about the principles and theories that should guide the process. The question thus becomes how ethical principles can be integrated throughout institutions and applied consistently across different organisational units.

Relating this example to the theoretical foundation discussed in the earlier part of this paper shows that there is a variety of work around key principles, legal elements, and codes of practice. However, these tend to focus on the big picture and broader decision making processes, not on the small implementation decisions along the way, nor on the users who are tasked with everyday decision making. The worked example has shown the importance of recognising that everyone has values and positions on ethics, and that these are not necessarily the same. Furthermore, applying a different ethical theory (e.g. utilitarian, deontological) to the same situation with the same principles (e.g. informed consent) does not lead to the same answer; the process is neither simple nor predictable. Finally, and this is the key point, context is critical: values will fundamentally change the dialogue (e.g. different teaching and learning approaches will lead to different questions and decisions). Overall, then, there is a need for a decision making process/framework for all the different people who make decisions along the way, from the IT implementation team to the teacher who decides to use social media that may not be part of the ‘normal’ practice of the institution.

The process of becoming an ethical institution where learning analytics are concerned thus requires a systematic, iterative and transparent approach. The systematic aspect relates to carefully considering the costs and benefits of particular decisions in light of institutional context and ethical theories and principles. The iterative aspect relates to a decision making approach that feeds back into institutional policies and processes to make them more ethical over time. Transparency relates to documenting decision making processes and articulating the institutional values that underpin them. As such, the process can assist in developing essential ethical literacy and practice.

Limitations

It is acknowledged that this study draws on a relatively small sample of participants from one region. However, the findings align with the concerns raised in much of the learning analytics literature. With the growth of online learning and teaching, and the accompanying collection of student data internationally, this study provides important insights from critical stakeholders and reinforces the need for further work in educating those who will play a central role in the use and application of learning analytics, especially with regard to ethics.

Conclusion

This paper has reported on findings from a group of questions related to ethics, asked as part of a broader Australian study about learning analytics and student retention. The findings show that an understanding of ethics related to learning analytics, and approaches to the application of ethical principles in the implementation of learning analytics, are still very much in their early development. Consideration of this topic is generally characterised by a combined lack of awareness and consistency. Given the complexity of learning analytics and its possible ethical implications, as well as the importance of context, the findings of this study indicate a need for explicit institutional ethical decision making frameworks to guide important learning analytics decisions. These need to focus on the process of decision making, to complement the various codes of conduct (Sclater 2016) and/or principles that are emerging in the literature (Drachsler and Greller 2016; Prinsloo and Slade 2015; Nelson and Creagh 2013). Overall, it has been argued that ethical principles should underpin institutional decision making in relation to learning analytics, guiding the implementation and application of analytics in an integrated manner across higher education institutions. This requires a multi-layered approach in which values, expectations and actions are aligned and made explicit. The framework discussed in this paper provides a starting point in this process.