Introduction

Over at least the past two decades, there has been rapid growth in the use of computer and Internet technology for pedagogical purposes in Higher Education institutions around the world. Park et al. (2008) note that “one unique and important aspect of Higher Education settings is that top university management in many institutions asks instructors to use an institution-wide system regardless of the rank and file’s desire and motivation to adopt the system” (p. 169). As this comment suggests, faculty vary in the extent to which they welcome such systems and implement them in their teaching.

How can variance in attitudes and practice be explained? One explanation revolves around the construct of ‘self-efficacy’ (Bandura 1977). Essentially, individuals’ beliefs about their competence or mastery in a particular domain affect their beliefs about whether their behavior will lead to a successful outcome. Those faculty members who have high levels of self-efficacy with respect to the technologies in question may be more likely to accept their use in practice.

Self-efficacy features in some of the models put forward in the extensive literature on technology acceptance. Within this literature, the most influential theoretical formulation is probably the technology acceptance model (TAM; Davis 1989). Drawing on the theory of reasoned action (Ajzen and Fishbein 1980), the TAM, in both its original and modified forms, is a popular framework for predicting the extent to which users will adopt a new technology (in this context, for example, a new method of delivering online educational content).

According to Davis (1993), two key variables influence intention to make use of a technology: its perceived usefulness and its perceived ease of use. Perceived ease of use can be seen as related to self-efficacy: individuals higher in self-efficacy with respect to a particular technology are likely to perceive it as easier to use. Behavioral intentions in turn influence actual system use. For example, Yi and Hwang (2003) showed that students’ behavioral intentions were correlated with their actual logged use (access frequency) of a virtual learning environment.

While the original TAM formulation has been widely used, a number of extensions to the basic model have since been developed. What these have in common is that they broaden the scope of the TAM by adding further variables. One area of particular practical interest is the translation of attitudes and behavioral intentions into actual actions: what predicts whether people will actually use technology in practice?

Clearly, in addition to psychological variables such as Internet self-efficacy, there may be other factors—facilitating and inhibiting conditions—that will mediate or moderate the intention-behavior relationship. This notion of facilitating or inhibiting conditions is incorporated in the unified theory of acceptance and use of technology (UTAUT) of Venkatesh et al. (2003), with their ‘facilitating conditions’ construct. Facilitating conditions are argued to have a direct influence on use behavior, bypassing the behavioral intention step.

An alternative model for explaining technology acceptance, one that also has its conceptual roots in the theory of reasoned action, is the decomposed theory of planned behavior (Taylor and Todd 1995). Ajjan and Hartshorne (2008) used this theory in a study examining intention to adopt Web 2.0 technologies among higher education faculty. Within the decomposed theory of planned behavior, perceived behavioral control is seen as a factor influencing behavioral intention, which then leads to actual behavior. Perceived behavioral control is decomposed into two factors: self-efficacy, and facilitating conditions in terms of resource and technology availability. While facilitating conditions are present in both this model and the UTAUT, their role differs: in the UTAUT their effect on behavior is direct, while in the decomposed theory it is mediated by perceived behavioral control.

Thus, while the TAM has been popular, both revisions and conceptually related alternative models have been proposed. However, this entire family of models has been criticized on a number of grounds. For instance, Bagozzi (2007) contrasts the parsimony of TAM with the complexity of UTAUT and finds both lacking. While the motivation of Venkatesh et al. (2003) in developing UTAUT was to provide a unified framework, resolving the situation where researchers must pick and choose between competing (yet plausible) models and constructs, UTAUT has not supplanted these other models, which are still used today. It appears, then, that the field has not yet reached consensus on a definitive and comprehensive model of the factors influencing technology adoption. The current study sought to contribute to this debate.

Our primary research question was whether self-efficacy was associated with faculty use of learning technology. Given that self-efficacy is most usefully considered in terms of a specific sphere of ability, rather than as a general, unfocused construct, we operationalized it as Internet self-efficacy (Eastin and LaRose 2000), which reflects confidence in the use of online technologies. Internet self-efficacy could be conceptualized either as a component of perceived behavioral control in the decomposed theory of planned behavior, or as an index of perceived ease of use (for online technologies in general) in the traditional TAM formulation. In either case, one would predict a positive association between Internet self-efficacy and technology use.

A second research question was whether clearly identifiable barriers are associated with technology uptake among academic faculty. Within the decomposed theory of planned behavior, facilitating conditions (or the lack thereof) may be considered as an element of perceived behavioral control alongside Internet self-efficacy, while earlier models such as the TAM do not explicitly consider them. Identifying such barriers, and assessing their impact on technology uptake, may inform both theory and recommendations for practice within higher education settings.

Methods

This study comprised an online survey of academic faculty employed at a large University in London, England. Technology-enhanced and blended learning is given a high priority at the institution, and all courses have at least a minimal presence on the virtual learning environment (Blackboard) used there. In many cases the material provided via the virtual learning environment goes far beyond a minimal presence, but there is considerable variance in the extent to which instructors integrate it into an overall learning and teaching strategy.

Participants

Participants were recruited in a number of ways. Pedagogical leaders across the University were asked to publicize the survey to colleagues; it was mentioned in faculty newsletters; and a recruitment email was sent to all faculty registered as instructors on Blackboard. One hundred and thirty-nine data submissions were received. In 21 cases, the respondent had not indicated consent for their data to be used in analyses (consent was sought both at the start and the end of the survey), so these cases were excluded from the sample. To maximize data quality, the datafile was examined for implausible patterns of responding (e.g., an obvious mismatch between age and educational qualifications); this screening did not indicate any problematic responses. Multiple submissions were controlled for using the survey platform’s proprietary technology, and further checked for by examining the datafile for obvious duplicates. No evidence of multiple submissions was found.

Among the 118 individuals remaining in the sample, 114 reported being academic faculty; the other four comprised two academic support staff, one manager, and one who did not answer the question. The analyses reported in this paper are restricted to the 114 academic faculty. Of these, 50 (43.9 %) were men and 64 (56.1 %) were women. The mean age of the 109 who reported it was 47.9 years (SD = 10.2). All but one had access to an Internet-connected device (e.g., computer) outside work, and participants reported spending an average of 23.77 h online each week (SD = 13.2). Some participants omitted some of the questions, so N varies across the analyses reported below depending on the level of missing data for the variable in question.

Materials and measures

Internet self-efficacy was measured using Eastin and LaRose’s (2000) Internet self-efficacy scale. This 8-item measure asks respondents to indicate the extent to which they feel confident performing various Internet-related activities (e.g., trouble-shooting Internet problems; turning to an on-line discussion group when help is needed). It has good internal consistency: in the present sample Cronbach’s alpha was 0.93.
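
For readers wishing to check comparable scales on their own data, Cronbach’s alpha can be computed directly from the respondents-by-items matrix. The following Python sketch is purely illustrative (our analyses used SPSS); the simulated `items` array is a hypothetical stand-in for the 8-item responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical stand-in: 114 respondents x 8 correlated items
rng = np.random.default_rng(0)
latent = rng.normal(size=(114, 1))             # shared "self-efficacy" signal
items = latent + 0.6 * rng.normal(size=(114, 8))
print(round(cronbach_alpha(items), 2))         # high alpha, as items share a signal
```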

Current use of technology-enhanced learning was measured with a list of 18 different tools and techniques (Table 1), in some cases broken down by application. The list does not exhaust all possible learning technologies, but comprises tools and techniques known to be used within the host institution: those the research team were aware of from their own practice or that of colleagues, supplemented by information from senior learning technologists within the institution about other techniques they knew were being used. Respondents were also asked to indicate any other type of technology-enhanced learning tool/technique they were using in their practice that was not already listed. For each tool and application, participants indicated whether they (a) currently used that technique; (b) had considered using it; (c) had used it in the past; or (d) none of these. A summary index of current technology-enhanced learning use was created by counting the number of different tools respondents reported currently using (possible range 0–18).
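
Constructing this kind of count index from a survey export is straightforward; the following minimal pandas sketch is illustrative only, with hypothetical column names standing in for the 18 tool/technique variables.

```python
import pandas as pd

# Hypothetical export: one row per respondent, one column per tool/technique,
# each coded "current", "considered", "past", or "none"
df = pd.DataFrame({
    "online_quizzes":    ["current", "past", "none"],
    "discussion_boards": ["current", "current", "considered"],
    # ... columns for the remaining tools (18 in total)
})
df["n_current"] = (df == "current").sum(axis=1)  # summary index, possible range 0-18
```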

Table 1 Percentage of sample (N = 114) currently using each tool, having considered using it, and having used it in the past

Perceived barriers to adoption of technology-enhanced learning were addressed with a series of 15 items asking about respondents’ experiences and perceptions of the use of technology-enhanced learning techniques in their own teaching. Respondents rated each item on a 5-point scale (anchored at ‘strongly agree’ and ‘strongly disagree’). The items were intended to probe perceived barriers to adoption, such as “Technology-enhanced learning methods are not suited to my subject”. They were generated on the basis of previous research on barriers to adoption of educational simulations and games by academics (e.g., Lean et al. 2006) and the experiences of the research team and their colleagues. Six of the items were drawn directly from Lean et al. (2006), and a further four were adapted from that source but re-worded to suit the current project. The remaining five were generated by the current researchers on the basis of experience and informal feedback from colleagues about barriers to their use of learning technology. The full list of items is shown in Table 2.

Table 2 Barriers to adoption of learning technologies, with Varimax rotated component loadings

Participants also completed a five-item measure of the extent to which they saw their ‘real self’ as reflected in online interactions (the Real Me scale; McKenna et al. 2002) and a number of other items related to use of technology specific to the host University. Data from these items were not included in the present analyses.

Procedure

The study was conducted entirely online. Participants followed a URL presented in their recruitment email or in one of the other recruitment routes, then saw a page with information about the study. On indicating informed consent by clicking a button, they were forwarded to the main questionnaire. The first page comprised demographic items and average hours of Internet use per week. This was followed by the Internet self-efficacy scale, the Real Me scale (not included in the current analysis), and then all the items related to use of online teaching tools. Finally, participants were given the opportunity to enter a prize draw as recognition for their contribution, and asked once again to confirm informed consent. The final page presented debriefing information about the study.

Data analysis

Data analysis was performed using IBM SPSS Statistics Version 19. Following the data screening outlined above and calculation of descriptive statistics, a principal components analysis was performed to identify groupings among the 15 potential barriers to adoption rated by respondents. Components were selected on the basis of the scree plot and parallel analysis, followed by Varimax rotation. Scores on the retained components were then calculated using the regression method, creating indices that could be used in further analysis. Both the first research question (whether Internet self-efficacy was associated with technology use) and the second (whether the identified barriers to adoption were associated with technology use) were then tested simultaneously using standard multiple linear regression.
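
Although the analyses were run in SPSS, the same steps are straightforward to reproduce in open tooling. The following Python sketch (a re-implementation for illustration, not our SPSS syntax) extracts principal components from the item correlation matrix and applies a standard Varimax rotation; the randomly generated `X` is a stand-in for the real respondents-by-items ratings matrix.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Varimax rotation of a loadings matrix (items x components)."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - L @ np.diag((L**2).sum(axis=0)) / p)
        )
        R = u @ vt
        new_var = s.sum()
        if new_var < var * (1 + tol):    # converged: criterion no longer improving
            break
        var = new_var
    return loadings @ R

# Hypothetical stand-in for the (114 respondents x 15 items) ratings matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(114, 15))

corr = np.corrcoef(X, rowvar=False)          # item correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]            # sort components by eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_components = 2                             # retained after scree/parallel analysis
loadings = eigvecs[:, :n_components] * np.sqrt(eigvals[:n_components])
rotated = varimax(loadings)                  # rotate towards simple structure
```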

Results

Participants reported using a wide range of tools. For each of the 18 techniques listed on the survey, respondents indicated whether they had used it or not (Table 1). The number of techniques currently used ranged from 0 (17.8 %) to 11 (2.5 %), with the largest group of participants (21.2 %) using 2. Thus, the sample appears to include both heavy users and non-users of the online learning tools we asked about.

The structure of the ‘barriers to adoption’ data was examined using principal components analysis. Preliminary analyses indicated that five principal components had eigenvalues over 1.0. However, examination of the scree plot (Fig. 1) suggested that a solution with fewer components was more appropriate.

Fig. 1 Scree plot from principal components analysis

The scree plot suggests a solution with two or possibly three components, but such a judgement involves a degree of subjectivity. Accordingly, we conducted a parallel analysis (Horn 1965) using the procedure outlined by Patil et al. (2008), in which one compares “the 95th percentile eigenvalues of several random correlation matrices with the corresponding eigenvalue from the researcher’s dataset” (p. 164). The parallel analysis indicated that the first two components had eigenvalues clearly exceeding the criterion for retention, relative to the 95th percentiles from 100 randomly generated correlation matrices. The third extracted component only just met the criterion, with an observed eigenvalue of 1.492 compared to 1.491 for the randomly generated data.
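
For clarity, the logic of this procedure can be summarized in a few lines of code. The sketch below is a generic Python implementation of Horn’s parallel analysis, written for illustration rather than taken from our workflow; `X` is assumed to be the respondents-by-items ratings matrix.

```python
import numpy as np

def parallel_analysis(X, n_sims=100, percentile=95, seed=0):
    """Horn's (1965) parallel analysis: retain leading components whose
    observed eigenvalues exceed the chosen percentile of eigenvalues from
    same-sized matrices of uncorrelated random data."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    random_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        Z = rng.normal(size=(n, p))  # random data with the same dimensions
        random_eigs[i] = np.sort(
            np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))
        )[::-1]
    threshold = np.percentile(random_eigs, percentile, axis=0)
    keep = observed > threshold
    n_retain = int(np.argmin(keep)) if not keep.all() else p  # leading run only
    return n_retain, observed, threshold
```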

Based on both the scree plot and the parallel analysis, the choice seemed to be between two- and three-component solutions. Both of these solutions, Varimax rotated to simple structure, are shown in Table 2. The patterns of loadings indicate that the two-component model provides the clearest and most parsimonious description of the data. In the three-component model, some items have substantive loadings on multiple components (component three, marked by only five items, is particularly affected by this). Furthermore, the groupings of barriers in the two-component model are easily interpretable in a theoretically meaningful way. Accordingly, the analysis that follows is based on extraction of two principal components, which jointly accounted for 42 % of the variance in the dataset, followed by Varimax rotation.

Component 1 accounted for 27.8 % of the variance. It is marked by items such as “There is limited availability of University resources to allow the use of technology-enhanced learning methods”, “There is limited availability of School resources to allow the use of technology-enhanced learning methods” and “There is limited support available (e.g., technical and/or admin.) for new methods”. It appears to reflect perceptions of structural constraints in the academic environment that prevent development or deployment of online learning techniques. Essentially, these are factors inhibiting technology use, and so could be viewed as the inverse of the ‘facilitating conditions’ construct found in some models. For the 7 items with their primary loading on this component, Cronbach’s alpha was 0.79.

Component 2 accounted for 14.3 % of the variance. It is marked by items such as “Students won’t react well to these methods”, “Technology-enhanced learning methods are not suited to my subject” and “I feel that using new methods is risky”. These items, along with others that load strongly on this component (see Table 2), appear to reflect respondents’ attitudes towards how useful or usable e-learning approaches would be for their area of teaching. This component appears to encapsulate much of the meaning of the TAM’s perceived usefulness variable, though negatively valenced. For the 8 items with their primary loading on this component, Cronbach’s alpha was 0.71.

Participants’ scores on the two components were calculated in SPSS using the regression method. These component scores were then used to examine the predictors of technology uptake.
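
The regression method computes each respondent’s component scores as a weighted sum of their standardized item responses, with the weight matrix given by the inverse of the item correlation matrix times the rotated loadings. A minimal, illustrative Python equivalent of this step (our scores were produced by SPSS) might look as follows.

```python
import numpy as np

def regression_scores(X: np.ndarray, loadings: np.ndarray) -> np.ndarray:
    """Regression-method component scores: Z @ inv(R) @ L, where Z is the
    standardized items matrix and R the item correlation matrix."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize each item
    R = np.corrcoef(X, rowvar=False)
    weights = np.linalg.solve(R, loadings)            # solves R @ W = L
    return Z @ weights                                # (n_respondents x n_components)
```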

A multiple linear regression, with simultaneous entry of all terms, was performed to examine the effects of Internet self-efficacy, Component 1 (structural constraints) and Component 2 (low perceived usefulness) on the number of online learning tools currently used by each participant. The overall model was significant (F(3, 94) = 15.09, p < .0005, R² = .33), with all three variables being significant predictors of technology use (Table 3). Internet self-efficacy was associated with higher levels of technology use, while both Components 1 and 2 were associated with lower use.
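
For completeness, the structure of this model is easy to express outside SPSS. The sketch below fits the same form of simultaneous-entry regression with statsmodels, using randomly generated stand-ins for the three predictors and the outcome; all variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
ise = rng.normal(size=114)               # stand-in: Internet self-efficacy scores
comp1 = rng.normal(size=114)             # stand-in: Component 1 (structural constraints)
comp2 = rng.normal(size=114)             # stand-in: Component 2 (low perceived usefulness)
n_tools = rng.integers(0, 12, size=114)  # stand-in: count of tools currently used

X_pred = sm.add_constant(np.column_stack([ise, comp1, comp2]))  # simultaneous entry
model = sm.OLS(n_tools, X_pred).fit()
print(model.summary())                   # coefficients, F statistic, R-squared
```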

Table 3 Multiple regression examining predictors of technology use

Discussion

The current findings indicate that Internet self-efficacy is positively associated with use of learning technology by academic faculty. Conversely, low perceived usefulness and inhibiting conditions were associated with lower reported use. These findings suggest that when trying to understand faculty use of learning technologies, both individual and contextual factors need to be taken into account.

In terms of individual factors, faculty members high in Internet self-efficacy reported use of more learning technologies than did those lower in Internet self-efficacy. This result is consistent with work (e.g., Hsu and Chiu 2004) indicating that higher Internet self-efficacy was associated with higher intentions to use, and actual use of, online services. The current findings extend such work and complement those of Ajjan and Hartshorne (2008) by demonstrating that Internet self-efficacy is associated with self-reported actual use of learning technologies by higher education faculty.

In terms of contextual factors, the implications for theories of technology acceptance bear consideration. This study does not provide the basis for any definitive comparison between competing models of technology acceptance: it was not designed to do this, and does not provide data relating to many (indeed most) of the constructs specified in models such as TAM and UTAUT. However, it does occupy the same conceptual space and provides some information about characteristics a successful model should incorporate. Given that Component 1 (structural constraints) was found to be important alongside Internet self-efficacy and Component 2 (low perceived usefulness), the classic TAM formulation is lacking, because it incorporates only the latter two of these (with Internet self-efficacy considered as a proxy for perceived ease of use).

Both UTAUT and the decomposed theory of planned behavior incorporate constructs analogous to all three, so would seem preferable to the original TAM in that respect. Further development of models of technology acceptance should take this into account: whichever model ultimately wins out, it must incorporate recognition of facilitating or inhibiting conditions. Our Component 1 is conceptually the inverse of facilitating conditions. A useful focus for future research would be to examine whether the effect of Component 1 on behavior is direct, as UTAUT would predict, or mediated by perceived behavioral control and behavioral intention, as the decomposed theory of planned behavior would predict.

As well as theoretical implications, the current findings provide a basis for practical recommendations to Higher Education institutions. First, Internet self-efficacy is significantly related to technology adoption among faculty. There are, of course, questions of causality here. Higher Internet self-efficacy could arise from greater use of tools rather than vice versa—a suggestion consistent with the finding (Torkzadeh and Van Dyke 2002) that engagement with technology (for example in a training course) can serve to increase Internet self-efficacy levels. However, the existence of the relationship does suggest that raising Internet self-efficacy by training academic faculty could facilitate uptake of technologies by increasing perceived ease of use or perceived behavioral control.

Second, structural factors within the institution (Component 1) must also be acknowledged. Many of the items associated with lower technology use reflect these institutional/infrastructure issues. The implication is that if a University wishes to increase use of learning technologies, it is not enough to train and encourage faculty: adequate investments must be made in technical infrastructure and support for those activities.

In conclusion, Internet self-efficacy, structural factors, and perceived usefulness were all associated with the uptake of learning technologies among higher education faculty, at least in the institution studied. The finding that structural constraints play an important role indicates that models of technology acceptance should include this variable, and it also carries implications for policy in educational institutions.