Introduction

The choice to adopt technological innovations is complex and dependent upon a variety of environmental and individual factors. As a result, there is often a notable time lag between the development of a technology and its broad adoption by users (Balcer and Lippman 1984; Privateer 1999). Time lags are influenced by the environments and cultures of organizations and by the perspectives, behaviors, and attitudes of individuals (Rogers 1995; Straub 2009). This lag exists in higher education, which has changed rapidly in the last decade with the advent of educational technologies (ET), including enrollment, learning, and advising management tools. These tools are increasingly leveraging learning analytics (LA), or educational big data, as the ability and capacity to mine large amounts of educational data has also increased; yet they are often not broadly adopted or used to their full potential (Dahlstrom et al. 2014; Peña-Ayala 2014).

LA “is the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (Siemens 2011, p. 1). LA is further defined by the volume of data available for analysis, the variety of that data, and the velocity with which that data is conveyed to users (Daniels 2015). LA’s potential to provide timely, visualized, predictive feedback and interventions has moved LA technologies to the fore, as higher education deploys LA-informed learning and advising management tools (e.g., Blackboard Learn Ultra, Moodle, Degree Compass, and the Student Success Collaborative) to improve student learning, retention and success (Arnold and Pistilli 2012; Dahlstrom et al. 2014; Picciano 2012; Viberg et al. 2018).

Despite the potential benefits of LA and the decision by higher education institutions to encourage faculty and staff to adopt LA tools to improve organizational and individual outcomes, little empirical evidence of improved outcomes related to use of these tools exists (Dawson et al. 2017; Papamitsiou and Economides 2014; Viberg et al. 2018). Further, adoption of LA technology is not universal (Brown 2016; Dahlstrom et al. 2014; Rhode et al. 2017). The barriers and incentives related to LA tool adoption by faculty and advisors are unclear, but research related to technology adoption and faculty pedagogical decision making indicates that access to trusted, accurate, and relevant data that aligns with users’ needs is a key factor (Ali et al. 2012; Brown 2016; Davis 1989; Svinicki et al. 2016; Venkatesh et al. 2003; Zellweger Moser 2007a). Adoption levels can also be hampered by organizational capacity and readiness factors, including variable technological resources and infrastructure (Norris and Baer 2013; Oster et al. 2016). Thus, adoption levels of LA technologies vary by individual and environment.

The purpose of this study is to understand how the technological barriers and incentives associated with use of LA tools shape faculty and advising staff decisions related to tool adoption and use. As such, the following research questions framed the study:

  a) How and to what extent do faculty and advisors use LA tools?

  b) What is the nature of the relationship between faculty and advising staff needs and LA tool capabilities and components?

  c) How do faculty and advisors respond to existing technological incentives and barriers of LA tools?

Conceptual framework

The conceptual framework for this study provides a foundation for understanding how faculty, within the context of higher education institutions, make technology adoption decisions. Zellweger Moser’s (2007a, b) faculty educational technology adoption cycle (FETAC) model, in conjunction with the general technology adoption literature and literature on faculty use of LA tools, underscores the importance of considering individual behaviors and organizational incentives when implementing new tools for faculty adoption. This framework also provides a useful lens through which barriers and incentives related to user adoption levels can be understood, as it accounts for the changeable perspectives, experiences, behaviors, abilities (perceived and actual), and environmental pressures (including social pressures and infrastructure support) that can impact adoption. Given the paucity of literature related to advisors’ adoption and use of LA tools, the literature is extended to include this population, as academic advising has long been recognized as a form of teaching (Appleby 2008; Crookston 1994; Hagen and Jordan 2008).

Technology adoption models

At the center of this framework is Zellweger Moser’s FETAC, which explicates faculty considerations for adopting a new technology into their teaching practices (Zellweger Moser 2007a, b). Zellweger Moser’s model grounds this study because it speaks to the unique context of technology adoption by faculty in higher education: the challenge of integrating new technologies into complex and siloed higher education structures within a professional bureaucracy composed of faculty with varying values, goals, trust levels, and available time to adopt new teaching technologies (Austin 2011; Brown 2016; Kezar and Lester 2009; Klein et al. 2019, in press; Lester et al. 2017; Svinicki et al. 2016).

Zellweger Moser’s FETAC works to explain the faculty behaviors that guide interest in and resistance to incorporating new technologies into practice (Zellweger Moser 2007a). The key behaviors of the FETAC, which are influenced by organizational context and individual experience, are time commitment, competence development, course design, teaching/learning experience, and reflection. Time commitment is shaped by individual characteristics (values, “innovativeness,” experiences, goals) and an organizational incentive structure; competence development (via training) and educational technology course design are influenced by organizational resources and support; the teaching/learning experience is impacted by trustworthy technological and organizational infrastructures (or lack thereof); and reflection is informed by peer and student feedback (Zellweger Moser 2007a, p. 66). The model is iterative, with each cycle informing subsequent cycles.

Adoption of new technologies is dependent upon extrinsic and intrinsic support, via organizational structures, resources, and incentives as well as individual interest and time (Zellweger Moser 2007a, b). Core to well-designed technologies that can establish interest in technology adoption is the strategic use of institutional resources and support, including formal incentives, professional development, and appropriate technological infrastructures (Norris and Baer 2013; Zellweger Moser 2007a, b). As noted in previous work by Venkatesh et al. (2003), in their unified theory of acceptance and use of technology (UTAUT) adoption model, social influence (the level of perceived importance of adoption by important others) and facilitating conditions (the level of belief that “an organizational and technical infrastructure exists to support the system”) (p. 453) are integral to individual adoption of technologies.

Clearly articulating the connection between faculty work and technological abilities is also an important aspect of technological competency development and adoption. The technological pedagogical content knowledge (TPACK) adoption model has shown that educators are more likely to incorporate technologies, and to improve their competency with technology, when they are able to see where pedagogical knowledge, content knowledge, and technological knowledge intersect (Mishra and Koehler 2006).

Connected to time, resources, and support is the need for well-designed technologies that are aligned with users’ needs and useful to their work. When technologies are misaligned or ineffective, the teaching and learning experience suffers, which leads to mistrust, negative assessments, and lack of use. Technology must be not only easy to use and useful to the work of individuals (Davis 1989), but also demonstrably more efficient and effective than current practices in order for individuals to consider adoption (Venkatesh et al. 2003). Further, “negative faculty experiences travel fast and influence the opinions of the larger community” (Zellweger Moser 2007a, p. 67). These experiences and their influences can reduce the time faculty are willing to devote to adopting or integrating a new technology. Thus, the adoption cycle begins again, but with potentially diminished time commitment by faculty users.

LA tool use literature

The literature related to adoption or rejection of LA tools by faculty and advisors is relatively scant. The bulk of studies instead focus on the student outcomes associated with use of LA tools by faculty, advisor, and student participants. The majority of work specific to faculty use of LA tools focuses on the barriers to adoption of these tools, namely a lack of clear, relevant, timely, or trustworthy data. For example, Ali et al. (2012) designed an LA feedback tool related to teaching and learning and asked study participants about the tool’s value. The researchers found that the tool’s feedback and data visualization were valued by faculty, who were consequently able to identify potential areas for change or improvement in their pedagogy. Faculty deemed the tool useful only if the feedback was straightforward and relevant and the interface was easy to use (Ali et al. 2012). Dawson et al. (2008) also found that data visualization influenced faculty adoption by “constrain[ing] staff understanding of the linkage between student online interactions and implemented pedagogical approach” (p. 228).

Lockyer et al. (2013) argue that this linkage is necessary for faculty use of LA systems. To be effective, LA tools must “bridge the technical–educational divide” and offer users real-time and relevant feedback on teaching, learning, and desired “educational outcomes” (Lockyer et al. 2013, p. 1446). However, bridging this divide, regardless of educational technology tool type, is often difficult when questions about data come into play. In their 2014 empirical study, Hora, Bouwma-Gearheart, and Park found that even when data-driven decision making was understood as valuable, specific barriers and incentives shaped the use of data by faculty. Barriers included a “lack of expertise with educational data and perceived poor quality of data” (Hora et al. 2014, pp. 18–20). The barriers described by Hora et al. are echoed in an Educause report by Dahlstrom et al. (2014), who found that although the LMS was appreciated by faculty as a means to improve teaching and learning, few used advanced LMS components, and “even fewer use these systems to their fullest capacity” (p. 10). This lack of use may be due to, as one faculty member reported, LMSs that lack personalization, are poorly designed, and are often difficult to use (Dahlstrom et al. 2014, p. 10).

The lack of personalized tools for faculty and advisors creates a real barrier to use, as Aguilar et al. (2014) found in their study of advisors’ use of an LA-based early warning system dashboard. Although advisors in this study used the dashboard throughout the term of study, they reported that the tool made no significant difference in their work. Further, midway through the study the researchers realized that the advisors were using the dashboard during advising sessions, an “unintended use” as perceived by the dashboard designers that required a redesign of the dashboard to meet advisors’ needs. Again, tools that are efficacious, effective, and useful to users are more likely to be adopted into regular practice (Davis 1989; Venkatesh et al. 2003). This was true in the case of Course Signals (CS), one of the first predictive LA tools with an early alert component. Faculty users noted that their ability to connect with and support students was enhanced, making use of the tool valuable to their pedagogical practice (Arnold and Pistilli 2012).

Recent work has underscored the importance of including users of learning analytics as a means to develop more relevant tools and experiences. In their investigation of first-year engineering students’ and instructional faculty’s use of learning analytics, Knight et al. (2016, 2018) found that students and their faculty have specific ideas about how data should be collected, used, and received. Specifically, faculty wanted learning analytics dashboards that could help them manage classroom experiences and interactions, while students were more interested in holistic management of their learning, from time management to major selection and management. Based on this work, the researchers argued that including students and faculty in the development of data and learning dashboards was more likely to result in “better informed development of and, ultimately, sustainable use of, learning analytics-based models and dashboards” (Knight et al. 2016, p. 215).

The value of this conceptual framework lies in its account of the individual and organizational incentives and barriers related to technologies that inform both perception and adoption of learning analytics tools. These incentives and barriers include personal and institutional commitment to technologies, reliable and supported infrastructures, technological relevance and effectiveness, and user reflection and assessment points.

Methodology

During the summer of 2015, we conducted a qualitative, instrumental case study at a large, public, research university in the mid-Atlantic region of the United States. An instrumental case study focuses on a specific phenomenon, is designed around a specific theory and literature, and is bounded by time and place (Stake 2005). Instrumental case studies are conducted to “provide insight into an issue” (Stake 2003, p. 137) as a means to deepen understanding of a phenomenon—in this case, LA tool use and its technological barriers and incentives. Because we as a research team were working from a constructivist perspective, Stake’s (2005) approach provided a flexible basis for the study’s design, data collection, and data analysis: it acknowledges diverse ontological and epistemological perspectives, informed by a theoretically and/or empirically grounded conceptual framework, that provide interpretations, not causes, of the phenomena under examination. This focused our attention on providing an interpretation of the phenomenon (i.e., LA tool use) under examination from the varied perspectives of our participants and within the specific context of their campus during the time our study was conducted.

Our choice of site was determined by the goals of an overarching, externally funded, exploratory research project focused on the creation of a new LA-based teaching and advising tool for faculty and advisor use at the study site. That project, conducted by a multi-disciplinary team of researchers that included ourselves, was guided in part by a goal that also informed the inquiry for this related study: to better understand the contexts in which, and reasons why, individuals in higher education organizations choose to adopt or reject LA tools. We selected the study site because the administration, at various levels and in various departments, had purchased several different LA-informed tools to support student retention and completion. The university has a long-held commitment to student-centered teaching practices to support its large and diverse student population. Further, administrators promoted LA tools to increase faculty, advisor, and student adoption levels, with the assumption that increased use would improve outcomes. In this regard, the institution was at a stage where individual faculty and advisors were choosing whether or not to engage with LA tools. Importantly, many of the LA tools being purchased were not integrated with each other or with other technologies on campus. The disconnect between these systems was due, in part, to the decentralized structure of the university, with multiple sub-units that were empowered to purchase and implement tools independently, and to the proprietary nature of educational technologies. Within this context, we sought to understand user decision making related to these tools at the technological, organizational, and individual levels.

Sample selection

Due to the focus on faculty and advisors, we targeted individuals in those roles who represented different sectors and constituent groups. Because the university is highly decentralized, with siloed colleges and departments, teaching and advising roles often overlap. Many faculty members are the primary advisors to students in their departments and many advisors have taken on teaching roles. We focused on faculty and advisors who represented multiple colleges, schools, and departments. Represented units included student affairs, international student support, engineering, humanities and social science, education, visual and performing arts, and business.

After ethics review board approval, we contacted both formal and grassroots leadership across the campus to recruit participants through key teaching and advising listservs and social networks. We sent emails to the university’s advising network, main university advising center listservs, and the faculty and teaching center listservs, advertised in the online weekly campus newsletter, and contacted key faculty and advising staff on campus. At the conclusion of the first focus group, we continued recruitment using snowball sampling to solicit participants, via email and in-person invitations. In all, 32 participants took part. While complete institutional representation via our sample is impossible, given the size of the institution and the nature of convenience and snowball sampling (Creswell 2002), an effort was made to find a diverse group of participants who had varying years of experience from a variety of disciplines and departments (see Table 1).

Table 1 Study Participant Table

Data collection

Our data collection consisted primarily of focus groups, with seven groups of 2–10 participants, each lasting approximately one hour. Six faculty and 21 advising staff members participated in the focus groups. Groups were generally divided into faculty and advising staff participants, so that we could focus on the particular incentives, barriers, and behaviors related to LA tool use for each role. Although groups were segregated by role, participants represented mixed levels of proficiency and LA tool adoption. The focus groups were audiotaped and transcribed. An observer also took notes during each focus group. The interview questions and prompts focused on three key themes: (1) the relationship between teaching/advising philosophies and approaches to use of LA tools, (2) how LA tools are, are not, or can be used to support teaching and advising, and (3) how institutional and technological incentives and barriers operate to help or hinder use of LA tools. Questions included: How, if at all, do you incorporate LA technologies in your teaching/advising practice? What are the incentives and barriers related to using these technologies? For this analysis, we focused on the technological barriers. Other, related work identifies the institutional and individual barriers and incentives related to LA tool adoption (Klein et al. 2019, in press; Lester et al. 2017).

We also conducted one-on-one interviews with five participants (two faculty and three advisors; see Table 1), recruited using the same criteria as focus group participants, which followed many of the semi-structured questions within the focus group protocol. During these 45-min interviews, we asked participants to use whatever LA tools they normally used in practice. For instance, advisors ‘walked us through’ a typical advising session with a student via technology, and faculty did the same for students in their classes. As they did this, we conducted participatory observations of them interacting with various tools and their components, which allowed us to see which tools faculty or advising staff use to inform their teaching and advising. We asked participants which systems they used, which systems were helpful in their work, and how they used these systems to teach and/or advise students. We also asked about any areas of constraint or frustration that they encountered in use of the systems and any components of the systems that were misleading or unhelpful. These sessions were both audio and video recorded and transcribed.

Data analysis

Consistent with methodological norms of qualitative inquiry, the systematic coding of texts (i.e., focus group and interview transcripts, and observation notes) served as the primary means of data analysis (Mertens 2005; Strauss and Corbin 1990; Saldaña 2015). We conducted open and axial descriptive coding on the focus group and observation transcripts using the qualitative software Dedoose. We created deductive codes based on a review of the literature and theory—faculty decision making, behavior change, and learning analytics—and additional inductive codes that emerged from our analysis of the data. Example codes included: type of use, trust, accuracy, ideal system, risk, frustration, benefits, and needs. From those codes and others, we derived the themes (e.g., infrastructure impacts assessment of LA tools, user evaluation impacts adoption, and technological structures can act as incentives for adoption) noted in the “Findings” section of this paper.

Trustworthiness and validity

We used several methods to ensure trustworthiness within the case study (Lincoln and Guba 2000; Stake 2005). First, we employed two coders who compared their interpretations of the emerging trends and their coding of deductive codes within Dedoose. We conducted coding separately and then compared codes and discussed discrepancies to fine-tune interpretation. Second, we engaged in member checking by asking clarifying and summarizing questions of participants during focus groups and interviews. Third, we collected data from multiple sources, including focus groups, individual interviews, and observations. While the focus groups provided the majority of the data for this study, the individual interviews and observations helped to clarify our interpretation of the focus group data and to create a fuller picture of LA tool use, incentives, and barriers. The observations were an opportunity to confirm and more deeply understand the complexities of using LA within the context of the organization. Fourth, we included individuals from across the campus to get a fuller picture of the various organizational levels related to adoption and use of LA tools. We focused on interviewing individuals who were in different departments, had different roles (advisors versus full-time faculty versus contingent faculty), and had been working at the institution for varying numbers of years or decades.

Limitations

There are several limitations to the research design that impact the study conclusions. First, we used a case study design, which is not intended to be generalizable to other institutions of higher education. The findings of this study represent the perspectives and experiences of this campus and the particular LA tools in use on that campus. Although not generalizable beyond the study site, the consistency of the technological barriers and incentives noted by faculty and advisors is likely to be informative for any higher education campus that uses LA tools. Second, while the study sample does capture the experiences of participants, not all units or departments are represented, and our sample skews toward both females and advisors. Despite this imbalance, we found no difference between male and female participants when we analyzed the data. Differences between faculty and advisors were primarily focused on the types, components, and incentives related to use of these tools. Third, while we, the researchers, have a relationship with the institution and use LA tools in our work practices, we have not worked directly on acquisition or management of any of the LA tools used at the university. Our purpose is not to advocate for LA tool use or adoption but rather to understand, from participants’ perspectives, what technological barriers or incentives exist that inform their decision to use and integrate LA tools. Despite these limitations, important insights can be gleaned from the findings, which are explicated in the “Discussion” section.

Findings

Findings from this study indicate that adoption and use of LA tools is influenced by user assessment of both LA data and technological infrastructures. Both faculty and academic advisors acknowledged that there was a place for LA tools in their teaching and practice, but only if the data provided by the tools were accurate, understandable, efficacious, and aligned with their needs and pedagogical perspectives. The specific technological barriers to faculty and academic advisor adoption were manifold. The strongest factors were a lack of integrated, accurate, and timely information; a lack of confidence in the predictive nature of analytics; a concern that prior data provided through LA tools could engender bias; a lack of useful interfaces and visualizations; and a lack of flexible components to support varying user needs. Reported technological incentives and support systems were minimal, although participants did note what an ideal LA tool system with associated incentives and supports could entail. This section focuses first on the existing technological infrastructures of LA tools and the study site that limit their use and influence users’ perspectives of these tools. From there, we focus on the concerns specific to LA tool use, including prediction, surveillance, and bias, and we conclude with findings on the technological components and infrastructure that could improve faculty and advisor adoption and use.

Technological infrastructure influences LA tool assessment

Lack of a trustworthy data infrastructure (due to the proliferation of multiple, un-integrated campus technologies, which were deemed cumbersome and misaligned with user needs) was among the most commonly reported barriers to adoption by study participants. Infrastructure barriers that participants noted included a frequent lack of technological integration and a lack of accurate and timely data. Importantly, participants noted that infrastructure issues were a constraint regardless of the type of educational technology tool they used. However, given the purported advantage of LA tools in providing more accurate, timely, and visualized feedback, when barriers to use emerged they created an assessment point for participants, from which they decided whether or not to trust that LA tool’s feedback.

Lack of integration

Participants used multiple educational technology tools as a part of their teaching and advising jobs. An academic advisor explained the tools she regularly uses:

So, you have [an institutional LA tool, a degree progress tool, a student information management tool], and then your e-mail and whatever. So, you have to be relatively savvy with all of those programs in order to use them and know which one is providing you which information…yeah, you’ve got to have a lot of stuff.

Participants repeatedly noted the need to use multiple systems, due to a lack of integration between technologies on campus. The lack of integration of technologies within the organization’s technological infrastructure was by far the most common barrier reported by participants.

All participants, but especially academic advisors, who reported having to keep multiple monitors and online screens open in each advising system, acknowledged that the lack of integration makes use of these systems cumbersome, time consuming, and even unreliable. The log-on requirements for the various LA and other tools often differed: one system would require part of the student ID, another used only names, and another needed a combination of identifiers. The need to maintain multiple logons and to access each system differently reduced participants’ ability to efficiently assist students or manage their workload.

For academic advisors in particular, the need to use and log on to so many systems to advise one student made it difficult to find the time to use each tool to its full potential:

So, yeah, just all of those different, you know, programs and even just taking the time to be able to check grades to see eligibility and stuff like that. I’m trying to remember all the intricate things of all the programs we do. There are so many different programs, and different things that we need to try to learn, and it’s really difficult to master each one.

Cumbersome infrastructure was a significant factor for the academic advisors in the study who use LA tools. Many academic advisors noted that their student caseload was over 200 students and that they often could only dedicate 15 min to each student appointment. This meant that any additional time needed to assess student progress would hamper their ability to help students in a timely and efficient manner. Consequently, the more time it took to master or use a tool, the less likely they were to use that tool.

Faculty were also concerned about time, with multiple faculty, especially tenure-track faculty, noting that the priority was to publish rather than to take the time to learn about the components and capabilities of a new tool. For instance, a tenure-track assistant professor noted that she knew the basic function of the institution’s LA-based early warning tool, but that she is also “probably using maybe 10% of what the program can do, because that’s all I have time to really do.” The cumbersome infrastructure created an environment in which participants’ time was further constrained and, as a result, if they decided to use a tool, it was often not to that tool’s full potential.

Lack of accurate and timely data

Issues related to the lack of integration and associated time constraints were compounded for participants when data from non-integrated systems appeared to be inaccurate or were not updated regularly. An academic advisor spoke to the issue of having to manually refine lists generated by the institution’s LA-informed advising tool that provided inaccurate data (data visualized in the tool that had not been synced with the institutional database and was therefore out of date):

And so, one day I spent like two hours just cleaning up a list that I had. And it took so much time away and that’s not really conducive to our work and the time that we have—or the limited time that we have. And so, it’s just—if it—if it keeps doing stuff like that then we’re just not going to be wanting to continue using it.

Lack of accurate data, especially data that requires user refinement or costs users time, creates a negative assessment point for the user, impacting their trust in the technological design and infrastructure of the tool and their decision to continue using it.

Further impacting the perceived trustworthiness of the data was the variable real-time accuracy of LA tools within the context of the university. All of the university’s ET tools update every 24 h, making it difficult to give accurate advice to students in the moment. For instance, if a student were to change his/her schedule the morning before an advising appointment, that information would not be available to academic advisors in time for their meeting later that day. The lack of real-time updating hampers users’ interest in incorporating these tools into their practice. The decision not to use tools because of a lack of real-time, accurate information becomes more pronounced during busy times of the academic year, like orientation. An academic advisor explained:

At orientations you often pre-advise. So, you can only do that based on what is in the system. I’ve also seen students who have talked to me [after orientation] be like, ‘Oh, well three of the four classes you recommended I already took. So, I only signed up for one class.’ And you’re like, ‘Oh.’… There’s always going to be [students] that fall through or re-sign up for the same class just because we recommended it, but we have no way of knowing that they’re taking it right now and over the summer, even in the spring semester because it hasn’t been evaluated yet [by the tool].

The lack of integrated systems and of accurate, real-time data in a single, easy-to-use tool created significant barriers related to trust for participants. Almost all participants noted that the various LA tools on campus contained inaccurate information due to complicated individualized programs of study, changes to college catalogues over time, and new degree requirements coming from university departments or accreditation agencies. Simply put, there appeared to be a mismatch between the complexity of degree programs in higher education and the technical aspects of the system, making the tools useless to many participants.

Lack of clearly visualized data

The lack of integration and accurate data, and the time it takes to use these tools, create an evaluation point for users. This evaluation point is often associated with how data is visualized and communicated in LA tools. Unclear and inaccurate visualizations led participants to ignore components of LA tools, augment the tool with outside or homegrown alternatives, or discontinue their use altogether. Academic advisors regularly noted that they ignore, and encourage student users to ignore, particularly confusing (and often inaccurate) data visualizations. An academic advisor explained their frustration with a degree progress visualization that was part of an LA tool they used for advising: “And so we both look at it and we have a good laugh and I tell them, ‘Just ignore it.’” If data is visualized in a confusing way, it not only spurs mistrust but can also lead to a complete lack of use. Another academic advisor noted the challenges of poorly visualized information:

…the way that it looks is so daunting that you just really avoid going [onto the tool’s dashboard] unless—you absolutely have to (laughs). I’ve even tried to, like, make [the tool’s dashboard] change the colors and stuff just to, like, make it look appealing. But it really—I mean, still the shell is the same and then it’s just not appealing.

Importantly, data visualization as it relates to organization, ease of use, and accuracy matters tremendously to the user. If faculty and academic advisors are unable to make a quick, clear, and trusted analysis of student progress and success via the data visualization within LA tools, they dismiss those tools. The potential for LA tools to provide real-time, visualized, and actionable feedback to help faculty and advisors teach and advise students was repeatedly hampered by the technological infrastructure at the university and by the limits of the tools themselves.

Ethical concerns influence LA tool adoption

Trusted analysis extends to trusting the algorithms and intent behind the LA tools as well. This was particularly true when participants considered using LA tools for predictive or surveillance purposes, or when they considered how access to prior student performance data might unduly influence their assessment of current performance. Issues related to the potential impacts of predictive data, the ethics associated with perceived surveillance mechanisms, and the potential for increased bias were concerns associated specifically with LA data and tools.

Use of predictive data

There was a notable lack of trust extended to the predictive nature of LA. Multiple participants, both faculty and academic advisors, spoke of the “risky” nature of applying predictive algorithms to student performance, retention, and success. A faculty member noted, “[Predictive analytics] sounds dangerous to me because there are a lot of weird data pieces there, you know.” In order to trust the ‘weird data pieces’ associated with predictive analytics, participants felt they needed to understand how the algorithms produced the predicted outcomes. Unfortunately, faculty and advisors were often given access to LA tools and data without that context.

This absence of context created concerns about using unclear data, which was not completely trusted and could have negative consequences for students. Participants wanted to know how the algorithms underlying the tools they used were developed. They were also concerned about how data might be used to shape future outcomes. An academic advisor described the risk of relying on predictive data to shape future outcomes:

To me it’s risky to predetermine somebody’s ability to succeed based on [an algorithm]. It’s risky. I think if you have good academic advisors using that data and helping a student kind of navigate some of those questions, then it can be helpful. I think if you have people who are only looking at the data and then making decisions for students based on that data you run into tracking issues which is where we see under-represented students pulled out of high schools early and tracked into, you know, tracked out of college early.

The riskiness of using predictive data, no matter how accurate, was a real concern for participants. Universally, both faculty and staff were concerned that data from LA tools that predicted student outcomes could be used to limit, rather than inform, student potential.

Concerns about surveillance and bias

Using prior data, available to faculty and advisors via LA tools, was also a point of concern, especially for faculty. A faculty member noted concerns related to the surveillance aspect of these technologies:

Sometimes I feel like, even my going on this one little performance dashboard and sending that e-mail, “I’ve noticed that you haven’t accessed [the tool] for three days.” I’m like, “Oh, it’s so like Big Brother.” Like, “I’m watching you.” I’ll admit, I’m a little uncomfortable with you know, is it my job? Yeah, I guess because I care, because I’m a teacher. Is it really my job?

Although this faculty member wanted to use whatever tools and support mechanisms they could to support their students, the ability to monitor students outside of class was an uncomfortable aspect of LA technology.

In addition to surveillance issues, participants also had mixed feelings about how access to LA data, inclusive of prior academic experiences, could engender bias. Another faculty member explained:

I can see that [having prior LA data] could prejudice you by saying, ‘oh, this is a lazy so-and-so student,’ right from the get-go. On the one hand [access to prior data] is very useful. And in some ways if I have the information about all the students I feel like I’m less likely to be biased. Right? Because it’s really sort of as a class and I’m thinking as a class dynamic. I’m not just thinking, “Oh, this one student is the troublemaking student.”

For all participants, the existence of LA data, whether used to predict, surveil, or inform, needed to have a clear purpose. As an advisor explained, “I found that different pieces of technology can be very useful but there has to be sort of a meaning behind it.” That meaning, for all participants, was rooted in the actionable data associated with the LA tools they used.

Comfort with actionable data

Across the focus groups and interviews, participants reported being more comfortable in using actionable data—data related to current course work and assessments, transcript review and degree planning, or determining next steps—than in using data to predict success or failure. Their concerns related both to trusting the algorithms behind the data and to the desire not to be a “dream destroyer.” As detailed in another paper (Klein et al., in press), faculty and academic advisors worked from a philosophy of student support and a belief that encouragement could help build student self-efficacy and lead to success even if a student’s course-taking history indicated otherwise. Thus, participants often viewed the use of non-transparent, predictive algorithms with questionable or unclear visualizations to determine student success as being in conflict with their philosophical approaches to teaching and advising.

Technological structures as incentives for adoption

Participants noted relatively few technological incentives or support structures related to use of LA tools. For active users of LA tools, the incentives were generally intrinsic, and few institutional incentives or support mechanisms were provided. Our recent paper (Klein et al. 2019) investigates the institutional barriers and incentives related to LA tools. Participants provided many ideas for valuable components and technological incentives that they believe should exist in an ideal system in order to encourage broad adoption and consistent use.

As expected, the greatest wish of users across the study was for an integrated, intuitive, accurate, real-time tool that could “talk” across platforms and would include seamless email and note-taking components. Above all, accuracy was deemed important, as one academic advisor explained, “It has to have high accuracy. And like even with a degree evaluation, it can be very helpful, but it’s almost always never 100% correct. And, so, the average student, you want him to know it’s there, but they also got to know how to read it.” Student-facing tools, in the perceptions of faculty and academic advisors, required strict accuracy. Experience with nervous and anxious students, and concern about the impact of data visualizations on student self-efficacy, led to the belief that any errors could reduce student retention and success. With large class sizes and advising caseloads, managing those reactions may not be possible.

Creation of a flexible, open-source tool that could be shaped to a user’s role and needs was another strong recommendation by participants. A faculty member explained the benefit of flexibility to course design, “So to be able to pull in Google Drive, for example…without breaking the entire structure, that would be perfect.” Faculty and advisors wanted LA tools to be open source and able to incorporate Skype, FaceTime, or the specific online pedagogical tools that they used, as a way to better teach and advise their students.

In addition to the desire for greater integration and flexibility of LA tools was the desire for useful and simple components to assist with student interactions, including transcript and prerequisite visualizations, catalogue integration, GPA calculators, and appointment schedulers. An academic advisor explained the allure of quick-checking tools like these: “You know, so I’m doing a hundred and one calculations. If there was a way for me to say, ‘Okay, check…’ You know, maybe even like a check box. Like check, check, check these courses and maybe calculator that can tell me the GPA.” Although these simple components were valued, so were more complex components related to understanding the whole student.

Both faculty and academic advisors noted that, despite concerns about bias or surveillance, they would like a more holistic understanding of their students and their lives outside of the classroom. A faculty member noted that she wanted,

…the life-long view of the student from cradle to grave. I mean, perhaps I’m dreaming. But that would be perfect in terms of how to advise them because, I mean, life happens. They could have started out in one direction. Something happens in life and it would be better maybe to steer them or show them their options for something else.

Similarly, another faculty member spoke to the need to tailor LA tools so that she can have a deeper understanding of her students in order to better support them in their learning:

If we could have a comprehensive view [of the student], it’s not just great for assignments. But how students approach a course itself, how they approach learning, how they approach habits, what’s their personality… This is a huge project probably. We can’t do it right now. But in the long run if we can come up with something big like this, like an athletic coach, for instance… They approach every single area of an Olympic champion. And that would probably be very successful.

A holistic view of students was desired by faculty and academic advisors, who valued the idea of knowing more about the whole student—their learning preferences, their co-curricular activities, and their self-perception related to coursework and degree progress. Yet, as discussed previously, some faculty and academic advisors were concerned about data security, privacy and subconscious bias being created by knowing more about students than what is offered by academic data. What data is incorporated, visualized, and available to academic advisors and faculty, and their students, requires thoughtful consideration and additional analysis.

Discussion

LA tools have the potential to leverage institutional data to better support student learning, retention and success. Past studies have argued that the lack of alignment between technological capabilities and users’ needs and expectations can hamper the adoption of innovations, including LA tools (Dahlstrom et al. 2014; Hora et al. 2014; Lockyer et al. 2013; Rhode et al. 2017; Venkatesh et al. 2003; Viberg et al. 2018; Zellweger Moser 2007a, b). While many of the findings in this study confirm prior studies, this work also extends the literature by providing insight into the technological barriers of LA tools from the user’s perspective. Given the historical paucity of data related to the technological barriers and incentives reported by the faculty and advisors who use these tools, this study extends the literature on LA tool and innovation adoption by identifying and explicating the specific technological barriers to adoption by faculty and academic advising staff. In particular, this study highlights the types of disconnect that have developed between these innovative technologies and user needs and the impact that disconnect has on user decision-making. A major contribution of this study is the extension of the literature to better understand the factors related to assessment of, and decisions to use, these tools. Among the findings discussed here are the impressions that barriers make on LA tool users, how those impressions can influence action, and the necessity of including users in the development and deployment of these tools for more relevant and reliable use.

Barriers inform technological impressions

Our findings align with many of the components in Zellweger Moser’s FETAC model, the technology acceptance model (TAM; Davis 1989), and the UTAUT (Venkatesh et al. 2003), which identify infrastructure, time, ease of use, usefulness, personal perspectives, experiences, behaviors, and abilities (perceived and actual) as factors that can act as barriers to adoption of new technologies. Zellweger Moser (2007a, b) argues that individual characteristics determine the level of time and commitment faculty are willing to contribute to incorporating LA tools into their practice. Further, we found that inaccurate, non-integrated, or untimely data ultimately led to uneven and reduced levels of adoption. This finding is also consistent with the literature (Dahlstrom et al. 2014; Hora et al. 2014; Lockyer et al. 2013); however, our study explains how these barriers impact adoption levels.

Specifically, lack of integration leads to inaccurate and mistrusted information, misalignment between technology and needs, and pulls on limited user time. Because LA tools often lack integration and are perceived as inaccurate and untrustworthy, they are often viewed as ancillary support mechanisms rather than systems that could provide comprehensive or predictive value for faculty, academic advisors, or students. This view is compounded by LA tools that exist as closed systems or that lack the flexibility needed by users. Given their positions, workloads, and time constraints, study participants stated that they needed to understand how LA tools could make their work better by improving efficiency and efficacy. While many participants noted that no one tool can be everything to everyone (although that is what users desire), there was still frustration with the limitations of LA tools perceived to be developed in a vacuum (i.e., without organizational or technological integration capability and without understanding of user needs). Faculty were particularly resistant to using LA tools that lacked efficacy and personalization, likely due to the ways in which they have been professionally developed and socialized into their pedagogical approaches and the time necessary to implement depersonalized or ineffective technologies (Austin 2011; Zellweger Moser 2007a, b). Future work should focus on developing tools that can easily be integrated into both organizational systems and individual practices and that are better aligned with professional needs.

First impressions matter

This study underscores how user perceptions of LA tools can impact decisions to adopt. Similarly, work by Venkatesh et al. (2003) and Davis (1989) underscores that adoption depends upon individuals’ perception that technologies will improve their workload, via ease of use and usefulness to their work. When these perceptions are violated, through data inaccuracies or lack of integration, attrition in use can rise. The UTAUT model explains how adoption can be negatively impacted when users perceive organizational and technical support infrastructures as non-optimal (Venkatesh et al. 2003). Within the FETAC, once faculty begin to doubt the trustworthiness of the LA course design (through negative experiences), an evaluation point is established that affects future use and willingness to commit time to continued LA tool adoption.

Among the most noted evaluation points for study participants was their reaction to data visualization and to the predictive and prior data available through LA tools. Important in this study is an understanding that, for many faculty and academic advisors, use of LA tools made work more cumbersome, not less, because they had to spend time making meaning of confusing and erroneous visualizations. There was an implication that data visualizations in student-facing versions of LA tools that are meant to help students progress could instead, if not carefully displayed and interpreted, be harmful to student advancement. Moreover, concerns about bias, surveillance, and the risk of using prior and predictive LA data acted to limit participants’ trust in that data and their decisions to use LA tools in their practice. As Ali et al. (2012) and Dawson et al. (2008) have noted, users are more likely to question data (and ultimately the LA tool itself) if dashboard information is ill-presented, confusing, lacks transparency and integration, or does not speak to users’ needs or personal interpretations. This perspective holds within the FETAC, which notes that once users begin to question the technological infrastructure or design, they begin to reflect negatively on their experience using these technologies, which hampers further adoption (Zellweger Moser 2007a, b).

The influence of data visualization and the impacts of predictive and prior data on LA tool adoption are important findings from this study and have broad implications. The current dearth of data related to the impact of LA tool visualizations and associated interventions on users provides an opportunity for future research. Future studies should examine the impact of data visualizations as they relate to adoption and use by faculty and advisors, in addition to student-focused studies. Further, we need to understand if and how these tools—their visualizations, data, and associated interventions—affect student actions, behaviors and decisions and whether they act as a motivator or inhibitor to student learning, retention and success. This is especially important when considering predictive and prior data, as the stakes for misinterpretation of data that predicts future success or failure can be high. For those working in higher education, this finding speaks to the need for training and professional development to help users understand the data they receive through LA tools, the potential implications of that data, and best practices and ethical guidelines for its use.

Leveraging barriers through inclusion

Finally, given the barriers reported in this study, future development of these tools should take into account user concerns and consider them not just as barriers but also as potential incentives that can be leveraged to improve tool efficacy and consequent user adoption. As Aguilar et al. (2014) found, problems can arise when users are not included in the design process. A major implication from this study, confirming similar work by Knight et al. (2016, 2018) and Klein et al. (2019), is the need for LA tool developers to engage potential users (i.e., faculty, staff, and students) in the design and development of these tools based on professional experiences and practical uses. For higher education administrators, this inclusion should be made a priority in their organizations during LA tool purchase, implementation, and adoption.

One way to achieve this inclusion is through the establishment of an LA committee or council, inclusive of a variety of campus stakeholders, who can help advise administrators on the incorporation of LA into academic work. When users are included from design through adoption, LA tools are more likely to be better integrated with other systems and better aligned with users’ needs. This finding extends our understanding of user adoption of LA tools by illuminating the importance of inclusive design and integration processes as a means to leverage better adoption of these tools. If LA tools are to be successfully integrated into professional ways of being, developers must consider the needs of their users, not just to design more flexible systems or to create clearer visualizations, but to establish practical and useful tools based on the needs and desires of the users themselves.

Conclusion

LA tools hold enormous potential to improve student learning and outcomes in higher education. However, this potential is dependent upon use by the faculty and advisors who are teaching and advising students throughout their tenure at a university or college campus. Our findings indicated that while faculty and advisors are aware of and use LA tools, they do not do so without frustration and concern. These feelings are tied to the technological barriers that exist within these tools and within the technological infrastructures of their campus, and they have led to uneven use, mistrust, and alternative methods of practice. As stated in the discussion, future work should focus on improving the integration of LA tools with other campus technologies and with individual user needs and practices; on creating clear, accurate, and relevant data visualizations; and on including the professional needs and perspectives of faculty and advisor users.