Introduction

Amidst growing concern about a ‘learning crisis’ in public schools in the developing world (World Bank 2018), countries such as India are turning to Information and Communication Technology (ICT)-led interventions in schools in the belief that technology by itself can improve learning levels (Negroponte et al. 2006). Informed by this “technocentric thinking” (Papert 1993), policy and practice have tended to focus on introducing ICT in schools (Trucano 2012, 2015) and assessing its impact on learning outcomes. But since any technology is part of a complex web of interactions among pedagogical, cultural and institutional practices, it is often difficult to identify and control the factors that influence the efficacy of ICT. Hence, it is no surprise that assessments of the impact of ICT on outcomes have shown “mixed evidence with a pattern of null results” (Bulman and Fairlie 2016)—positive results in some cases (Muralidharan et al. 2019; Naik et al. 2016) and no or negative effects in others (Fuchs and Woessmann 2004; Peña-López 2015). The realization that technology is enmeshed in human processes has led to the development of ‘Technology Integration’ as a framework to examine how technology might be used to improve learning (Liu et al. 2017). The pedagogical beliefs of teachers, who are ultimately responsible for using the technology to improve learning in their classrooms, are key to this process of integration (Tondeur et al. 2017). In this paper, we draw on ‘Technology Integration’ and the role of teacher beliefs in this process to assess a smart-class initiative that was introduced in 1609 public schools in one province in India in September 2017. We further illustrate how a program designed to improve learning outcomes can fail to meet its objectives when there is neither a clear commitment to technology integration through changing teacher beliefs nor a clear understanding of how the assumptions made by those designing an ICT intervention interact with those beliefs.

Technology integration and teacher beliefs

The realization that technology can impact learning positively if it is part of cultural change in the system in which it is embedded, is not new (Papert 1980, 1993; Selwyn 2011). The role of the key human actor—the teacher—in making technology part of a new teaching–learning culture has, therefore, attracted scholarly attention for quite some time (Lowther et al. 2008). Much of this attention, drawing particularly on Ertmer (1999, 2005), has focused on identifying and overcoming the first and second order barriers that teachers are likely to face as they weave technology into cultural change. Hew and Brush (2007), in their review, identified four first-order barriers that are external to the teacher (resources, institutions, subject culture, and assessment) and two second-order barriers (teacher attitudes and beliefs, and knowledge and skills). The first-order barriers, though formidable in certain contexts (O’Mahony 2003; Pelgrum 2001), were seen as less significant than the second-order barriers (Dexter et al. 2002; Ertmer 1999; Newhouse 2001; Zhao et al. 2002; Judson 2006).

The focus on the second-order barriers has led to the exploration of a number of dimensions associated with teacher attitudes, beliefs, knowledge and skills. Teacher-student relationships, self-efficacy beliefs of teachers and students, and teachers’ technological-pedagogical-content knowledge and beliefs have been shown to play a mediating role in the technology–learning link (Ponticell 2003; Haney et al. 2002; Wozney et al. 2006; Buabeng-Andoh 2012; Taimalu and Luik 2019). The positive role of constructivist beliefs of teachers (Judson 2006), and of teacher-directed use of ICT by students (Miranda and Russell 2012), has also been studied. Tondeur et al. (2017) summarize the key role that teacher beliefs play in effective technology use: “Ultimately, teachers’ personal pedagogical beliefs play a key role in their pedagogical decisions regarding whether and how to integrate technology within their classroom practices” (p.556). They show that the relationship is bi-directional. On the one hand, technology-rich learning experiences can shift teachers’ beliefs in a student-centered direction. On the other, teachers who already hold such beliefs are more likely to use technology for student-centered learning. In both cases, however, the relationship is affected by perceived barriers or beliefs, and requires sustained professional development to take hold.

The technology integration literature also indicates that teacher attitudes, beliefs, knowledge and skills have to be seen in relation to the contexts in which teachers work. Thus, the need to take teachers’ perspectives into account while implementing technology in classrooms (Muller et al. 2008), involve teachers in decision making about technology in the classroom (de Koster et al. 2017), and address institutional complexities that affect teachers (Miglani and Burch 2019) has also been noted. Liu et al. (2017) sort these teacher-related and context-related factors into three clusters: teacher characteristics, school characteristics, and contextual characteristics; taken together, these influence technology integration, which Liu and colleagues operationalize as the use of technology to support a “variety of instructional methods” (p. 798) in the classroom, as measured by a self-report on the frequency of the use of technology to support instruction. The teacher characteristics they consider include teaching experience with technology, level of education, teaching experience and gender. But the influence of the three clusters on technology integration is mediated by two other teacher-related factors: teacher confidence and comfort in using technology, and teacher use of technology outside the classroom. Thus, teacher-related factors play an important role in the effective use of technology for learning.

However, in spite of this recognition of the role of teacher beliefs and other teacher-related factors in technology integration, successful implementation of a technology-enabled classroom remains a complex issue. Teachers’ stated beliefs have not always predicted the use of technology in practice. Han et al. (2018) found that South Korean teachers, who held constructivist pedagogical beliefs similar to those of teachers in the United States, were unable to convert their beliefs into technology-enabled learning practices. In a case study of an immersive virtual classroom environment, Mills et al. (2019) found that although training changed teacher beliefs, it did not change classroom practice. Scherer and Teo (2019) found a significant positive relationship between perceived ease of use and behavioral intention to use technology, but others report that teachers who value technology in their personal lives and employ it usefully there are unable to integrate technology into their classroom practice effectively (Marwan and Sweeney 2019; Nath 2019). Ursavaş et al. (2019) called for a renewed focus on subjective norms (“an individual’s perceptions regarding the approval or disapproval of important others of a target behaviour”, p. 2503) to better understand intention to use technology and the conversion of beliefs into practice, especially among pre-service teachers. Hosek and Handsfield (2019) have shown how school-level imperatives, in this case policies related to critical digital literacy, led to a “disconnect between their [teachers’] theoretical and pedagogical beliefs and their actual classroom decisions regarding student participation in their digital classrooms” (p. 10). These studies show that though teacher beliefs may be central to facilitating or hindering technology integration, their interaction with a number of contextual factors still needs careful study through an examination of the classroom practices of teachers and students (Matos et al. 2019) or of teacher decision-making processes (Kopcha et al. 2020). An interaction that seems to be missing in the literature discussed here is that between the beliefs of teachers and the assumptions of content developers, as those assumptions are inferred by teachers while they use the material supplied to them. It is on this interaction, in a public system that had mandated a smart-class intervention, that we focus in this paper.

The “technocentric thinking” that informs educational public policy assumes an uncritical faith in the ability of technology to improve learning outcomes. However, given the mixed evidence about the impact of technological interventions, it is reasonable to argue that any ICT-led intervention has to be carefully assessed for its impact on learning before the processes that influence the relationship between the intervention and the outcomes are explored. Drawing on this reasoning, on the importance attached to teacher-related factors by Liu et al. (2017), and more specifically on the focus on teacher beliefs about teaching and learning highlighted by Tondeur et al. (2017), we derive the following framework to study technology integration and the role of teacher beliefs in the Smart-Class Initiative (hereafter SCI) under study (Fig. 1).

Fig. 1 Framework for the study of SCI

Research questions

SCI focused mainly on Mathematics and Science (it also had content on the local language), and hence we use learning in these two subjects to assess academic outcomes. ICT is also expected to influence non-cognitive competencies in children; specifically, attitudes to the subjects being taught and self-efficacy beliefs have been found to be strong predictors of academic success (Nicolaidou and Philippou 2003; Li 2012; Uitto 2014). Students’ prior achievement and teacher technology self-efficacy (Laver et al. 2012) can also influence student outcomes. In addition, student factors such as caste, gender, and parental education and occupation (Kingdon 2002; Pritchett 2013), the availability of educational reading material at home (Marjoribanks 1996), and attendance at paid private tuition (Dongre and Tewary 2015) can influence academic performance. Therefore, we formulate our first question:

1. How do children who have been exposed to SCI classrooms for one year compare with children in non-SCI classrooms in terms of academic performance in Math and Science, and certain non-cognitive competencies such as attitude to Math and Science, and Math and Science self-efficacy beliefs, after controlling for prior academic achievement, student factors, availability of reading material at home, and private tuition?

Answering this question, which derives from the stated purpose of the project (namely, improving academic outcomes through digital classrooms), should generate an objective assessment of SCI outcomes. A study of how teacher beliefs and practices interact with the introduction and implementation of the technology should then help explain this objective assessment of outcomes.

2. How does the process of technology integration interact with teacher beliefs about teaching and learning, and how can this interaction explain the objective knowledge generated by the assessment of SCI outcomes?

Method

SCI was launched in September 2017 in 3173 classrooms of Grades 7 and 8 (age group 13–14) in 1609 schools. Each classroom was supplied with one projector, one laptop, one infrared camera, one speaker, one stylus pen, one laser pointer and one whiteboard (which acted as a smartboard); the two classrooms in a school shared a common wireless router. The e-content for the program (Fig. 2) was prepared by a private company, using the textbook as a base, in collaboration with officials of the government, and was certified for use by the government agency in charge of curriculum development. Teachers were trained over two days in the use of the package, and a Technical Support Person from the hardware vendor was deployed in each school for the first three months.

Fig. 2 Organization of content

Phase-1: comparative assessment of cognitive and non-cognitive outcomes

The assessment of learning outcomes and non-academic outcomes was carried out with two groups of children: one group studying in SCI-enabled schools and the other in non-SCI schools. The sample size was determined using Optimal Design Plus Empirical Evidence Software (Raudenbush et al. 2011). Assuming that school-level factors explained 25% of the variance (i.e. ICC = .25) (Spybrook et al. 2011) and that five students would be tested per school, we estimated a requirement of 306 schools in order to have a minimum detectable effect size of .2 (significance level = .05, power = .8). The schools for the study were randomly selected from the list of 5112 schools that had applied for SCI when it was announced in mid-2017. Thus, the treatment schools were schools that had been granted SCI, whereas the control schools were those that intended to adopt SCI but were not provided the program. A total of 310 schools (155 in each group) were selected. Students from Grade-8 were evaluated as part of the study. The evaluation instruments, consisting of a subject test and surveys of attitudes and self-efficacy beliefs, were administered in September 2018, exactly one year after the installation of SCI in the treatment schools. The administration of the test and surveys was supervised by a test supervisor, and all data were collected online. If there were fewer than five students in Grade-8, the school was dropped; if there were 5 to 10 students, all took the test; and if there were more than 10 students, 10 were selected at random by the test supervisor.
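For readers who wish to verify the design parameters, the following is a minimal Python sketch of the standard two-level minimum detectable effect size (MDES) calculation for a cluster-randomized comparison. The study itself used Optimal Design Plus; the multiplier of roughly 2.8 (two-tailed test, alpha = .05, power = .8) and the absence of covariate adjustment are assumptions of this sketch, not figures reported here.

# Minimal sketch: two-level MDES, assuming the standard formula
# MDES = M * sqrt(rho / (P(1-P)J) + (1 - rho) / (P(1-P)Jn)) with no covariates.
import math

rho = 0.25   # intraclass correlation assumed in the study
n = 5        # students tested per school
J = 306      # total number of schools across both arms
P = 0.5      # proportion of schools in the treatment arm
M = 2.8      # approximate multiplier for alpha = .05 (two-tailed), power = .8

mdes = M * math.sqrt(rho / (P * (1 - P) * J) + (1 - rho) / (P * (1 - P) * J * n))
print(round(mdes, 2))  # prints 0.2, consistent with the reported design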

Subject tests

The student test was developed in collaboration with the Education Department of the province. It had 15 questions in mathematics, 15 in science and five on the basics of computers (a module taught within mathematics to all children), with 30% of the questions assessing application of knowledge. A question bank of 50 questions for mathematics, 50 for science and 20 for computers was prepared, and the test questions were selected randomly from this bank. In the schools, the test was supervised by the test supervisor; the teacher was not present in the testing room.
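The paper specifies only that questions were drawn randomly from the bank; the following is a minimal sketch of one way such a test form could be assembled, with the question identifiers being purely hypothetical.

# Minimal sketch: drawing one test form from the question bank by simple random
# sampling. Bank sizes match the paper; the identifiers are hypothetical.
import random

bank = {
    "mathematics": [f"M{i:02d}" for i in range(1, 51)],  # 50 questions
    "science":     [f"S{i:02d}" for i in range(1, 51)],  # 50 questions
    "computers":   [f"C{i:02d}" for i in range(1, 21)],  # 20 questions
}
draw = {"mathematics": 15, "science": 15, "computers": 5}

test_form = {subject: random.sample(bank[subject], k) for subject, k in draw.items()}
print(test_form)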

Attitude and self-efficacy surveys

The questionnaires to measure students’ attitude and self-efficacy beliefs towards the subject were adapted from standard scales. The 15-item attitude to mathematics scale by Mattila (2005) (Metsämuuronen 2009; as cited in Metsämuuronen 2012) was adapted for both mathematics and science. It measured students’ liking of the subject (5 items), self-concept in the subject (5 items) and perceived utility of the subject (5 items). Self-efficacy beliefs towards the subject were measured using the 8-item scale from the Motivated Strategies for Learning Questionnaire by Pintrich et al. (1991). To measure students’ non-cognitive competencies towards technology, a 9-item survey was administered. The instrument consisted of a 5-item scale adapted from Liou and Kuo (2014), which measures students’ motivation and self-regulation towards learning technology, and a 4-item scale measuring students’ self-efficacy beliefs about using technology from Venkatesh et al. (2003) and Gu et al. (2013). All survey items were translated into the regional language and translation validity was confirmed by back-translating the questionnaire into English. Responses on students’ self-efficacy beliefs towards technology were recorded on a 6-point Likert scale; all others were on a 5-point scale.

Scores in two State Academic Tests conducted by the government in January 2017 and April 2018 (SAT1 and SAT2, with reading, writing and numerical ability scores and subject scores in science and math) were also collected to provide a measure of pre-SCI academic achievement; SAT1 was taken when both groups of children were in Grade-6, and SAT2 when they were in Grade-7, by which time the SCI children had had around seven months of exposure to SCI. The test administered under the present study in September 2018 provided a third measure, when the children were in Grade-8.

Phase-2: interpretive understanding of teacher beliefs and technology practice

One-hundred-and-seventy teachers teaching in SCI classrooms filled out a semi-structured survey form that asked about their pre-SCI practices and their current practices. This was pretested in eight schools before finalization. In addition, pilot case studies, spread over 6 days, were conducted in two SCI schools, to develop a case study protocol covering teacher beliefs and use of SCI, student practices, and home-school interactions. After the data of Phase-1 had been analyzed and the schools ranked by performance, two schools were selected at random from the top ten, and another two from the bottom ten. These four schools were the sites for in-depth observations and teacher interviews. After studying the individual cases, we looked for contrasts between the two sets of schools, examining in particular the pedagogical practices associated with SCI, methods of assessment and the pacing of the lessons. However, the practices turned out to be very similar across the four schools. Hence, in the following discussion we treat all four case studies as a set; the four schools are denoted as [S1], [S2], [S3], and [S4]. Data collection was mainly through classroom observations, and individual and group interviews of teachers. In addition, all four administrators in charge of SCI and the three-member SCI content development core team were interviewed. The analytic approach drew on coding and thematic analytic procedures recommended for observational and interview data (Elo and Kyngas 2007; Ryan and Bernard 2003; Bazeley 2013). We used the broad categories identified by Tondeur et al. (2017), more specifically ‘Technology enabling beliefs, and beliefs enabling technology integration’, ‘Beliefs as perceived barriers, and perceived barriers related to beliefs and technology use’, and ‘Alignment between beliefs and practices’, to guide the development of the themes discussed below.

Analysis and findings

Phase-1: comparative assessment of cognitive and non-cognitive outcomes in SCI and non-SCI classrooms

Overall, 2574 students from the 310 schools responded to the survey and test: 1314 students from 155 SCI schools and 1260 from 155 non-SCI schools. Two students, from two different SCI schools, did not fill out the survey but took the tests. Table 1 provides a summary of the respondents’ demographic information. Many of the children’s parents (70%) had not studied beyond Grade-8, and the fathers of most students worked in agriculture. Very few students reported that they attended paid private tuition. Less than a third of the respondents indicated that their parents purchased additional reading materials for them. SAT1 and SAT2 scores were available for 2364 of the 2574 students (1206 in SCI schools and 1158 in non-SCI schools).

Table 1 Summary of demographic data

We conducted a two-level multivariate analysis to determine the effect of SCI on student knowledge, attitude and self-efficacy beliefs while controlling for individual student background and school-level influences. This accounts for the effects of individual student-level characteristics (gender, caste, parental background, etc.) and school-level factors (teacher, school facilities, school management, etc.) on student outcomes. Data analysis was performed in Mplus 8.3 (Muthén and Muthén 2017) using the maximum likelihood estimator with robust standard errors (MLR). Confirmatory factor analysis (CFA) was performed for the measures of ‘attitude to subject’ and ‘subject self-efficacy beliefs’. The CFA indicated that two of the attitude subscales, liking of the subject and self-concept in the subject, were highly correlated with each other and with self-efficacy beliefs towards the subject (r > .9, p < .05 for both Science and Math). Due to this high correlation, responses to the subscales on liking of the subject and self-concept in the subject were dropped from further analysis. We computed the reliabilities of the remaining scales using both Cronbach’s alpha (Geldhof et al. 2014) and the Spearman-Brown formula (Muthén 1991). Model fit was ascertained using the following criteria: root mean square error of approximation (RMSEA) less than .05, comparative fit index (CFI) greater than .95, Tucker-Lewis index (TLI) greater than .95, and standardized root-mean-square residual (SRMR) less than .08 (Hu and Bentler 1999). Table 2 presents the summary of student responses to the attitude and self-efficacy surveys.

Table 2 Responses to attitude and self-efficacy scales of mathematics
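As a simple illustration of the reliability check described above, the following is a minimal Python sketch of Cronbach’s alpha for a multi-item scale; the reported analysis was carried out in Mplus 8.3, and the data frame and item column names used here are hypothetical.

# Minimal sketch: Cronbach's alpha for a multi-item scale. The study computed
# reliabilities in Mplus; the data and column names below are assumptions.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """One column per scale item, one row per respondent."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses to the 8 subject self-efficacy items (5-point Likert);
# because these are random and uncorrelated, alpha will be close to zero here,
# whereas real item responses would be positively correlated.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(1, 6, size=(200, 8)),
                  columns=[f"selfeff_{i}" for i in range(1, 9)])
print(round(cronbach_alpha(df), 2))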

The intraclass correlation coefficient (ICC), a measure of the variation in student responses attributable to school-level factors, was sufficiently high for the survey items to require the use of multilevel analysis (Muthén 1991). Table 3 presents the summary of the scores obtained by the students in the test and in the annual state academic tests.

Table 3 Summary of student scores
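For reference, the ICC referred to above is the standard two-level variance ratio (the notation below is ours, not the paper’s):

\rho = \frac{\tau^2_{\text{between schools}}}{\tau^2_{\text{between schools}} + \sigma^2_{\text{within schools}}}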

A dummy variable indicating the presence of SCI was modelled at the school level; a significant coefficient on this dummy variable would therefore indicate an effect of SCI on students’ performance in the subject tests or on the subject-specific attitude and self-efficacy belief surveys. The analysis was performed separately for each of the three domains: Science, Mathematics and Technology. All three models (Figs. 3, 4, 5) were a good fit as per the criteria of Hu and Bentler (1999).

Fig. 3 Science self-efficacy beliefs, attitude to science, and performance in science

Fig. 4 Mathematics self-efficacy beliefs, attitude to math, and performance in math

Fig. 5 Self-efficacy beliefs towards learning and using technology, and performance in ICT
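The structure of these models can be illustrated with a minimal Python sketch of a random-intercept model with the SCI indicator entered at the school level; the reported analysis was run in Mplus 8.3 with the MLR estimator, and the file name and variable names below are hypothetical.

# Minimal sketch: random-intercept (two-level) model with a school-level SCI
# indicator and student-level controls. The study used Mplus 8.3 with MLR;
# the CSV file and column names here are assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_outcomes.csv")  # hypothetical analysis file

model = smf.mixedlm(
    "science_score ~ sci + gender + caste + father_edu + reading_material"
    " + private_tuition + sat1_score",
    data=df,
    groups=df["school_id"],   # random intercept for each school
)
result = model.fit()
print(result.summary())       # the coefficient on 'sci' estimates the SCI effect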

The results of the analysis indicated no significant effect of SCI on student subject knowledge (Science: β = −.005, p = .968; Math: β = −.087, p = .462), attitude towards the subject (Science: β = −.162, p = .211; Math: β = −.057, p = .653) or subject self-efficacy beliefs (Science: β = −.167, p = .197; Math: β = −.053, p = .684). The analysis also indicated no significant effect of SCI on knowledge of ICT (β = −.135, p = .263) or on self-efficacy beliefs towards either learning (β = −.018, p = .882) or using (β = −.092, p = .458) technology. Thus, there was no significant difference in student cognitive and non-cognitive outcomes between schools with SCI and those without. We did, however, find some student-level factors that had a significant positive influence on learning outcomes, notably parental education (father educated beyond Grade-8) and the availability of additional reading materials at home.

Phase-2: interpretive understanding of teacher beliefs and technology practice

We now turn to an interpretive understanding of the interaction between SCI and teacher beliefs and practices, to arrive at a few tentative explanations for the failure of SCI to leverage technology for quality improvement. The content that made up the ‘SCI package’ was developed by a private agency that used the official textbooks as a base, and was certified for use by the government agency in charge of curriculum development. The overall theme that emerges from the discussion below is that the beliefs of, and the assumptions made by, the designers seem to have played a crucial role in either supporting the traditional beliefs of teachers or challenging them: the former leading to a reinforcement of the features and effects of a traditional classroom, and the latter leading to resistance that hinders learning processes. In the absence of supportive interventions to ensure technology integration, both responses have led to a reproduction of the traditional classroom, resulting in no significant differences in outcomes between SCI and non-SCI classrooms.

SCI reinforcing traditional beliefs about importance of ‘knowing content’

Public schooling in India has traditionally considered ‘knowing content’ more important than the process of learning (Kumar 1993); the textbook has been the key support for this belief among teachers and students (Kumar 1988). The structuring of the e-content in SCI seems to have reinforced this common belief, with the ‘textbook culture’ being replicated by the teachers through a construction of the traditional textbook as “lacking in resources to explain topics,” “time consuming,” and “deficient in multiple representations of knowledge,” and of the SCI content as a superior alternative. Two factors have facilitated such a construction. First, the e-content, by failing to exploit the dynamic potential of technology and relying on converting text and printed visuals into electronic form (for example, through the extensive use of pdf documents), conveys the impression that it is, in the words of one teacher, a “more engaging text.” As one of the key content developers commented, “Teachers are used to textbooks; so we have followed the content in use, but we want to make things easy for them, and so give as much detail as possible.” The latter belief spills over into assumptions about the role and autonomy of the teacher, which we take up later.

Second, a key feature noted by most teachers of the “more engaging text” is the gamification of assessment that the developers have built in. Paradoxically, this has only led to a reinforcement of the primacy of knowing content over the process of learning. A good example is SCI’s quizzes that follow the so-called “KBC format” (Footnote 1). A typical observation from [S1], a class of 24 students, is described below. A student is asked to come up and navigate to KBC using the stylus on the smartboard. After KBC is opened, the teacher initiates what is a ‘participation round’, in which many children try their hand at answering one question each. One student gets up, answers a question and sits down without waiting for confirmation about the correctness of the answer. The process continues with students coming up one by one, and is quite fast. Both the students and the teacher are eager to get to the right answer quickly. When an answer is wrong, the teacher says, “Find out the right answer.” Then the ‘regular round’ starts. On average, a student gets about five questions right. One girl gets 10 questions correct, wins the quiz and the class claps. Around six students mark the wrong answer on their very first attempt; they go back to their seats without saying anything. This pattern is repeated in the other three case study schools, with minor variations. It is possible that this method of assessment would work well if the learning process were fairly robust and if students got feedback on why some of their answers were wrong. In the absence of these conditions, the gamification built into SCI reinforces the traditional neglect of the process of learning and focuses instead on the “what” of learning, even, as we observed, to the extent of students memorizing the right answers that appeared on the screen. In other words, the technological features and a traditional belief in the primacy of ‘knowing content’ reinforced each other to reproduce the features of a traditional classroom.

SCI reinforcing beliefs about inequalities in learning

Given the inequalities in levels of learning among children within the same classroom (GCERT 2015), one of the stated objectives of SCI was to ensure that all children learned. In the SCI classroom, however, the teachers’ continued focus on getting the answers right, which the software reinforces, tends to serve the needs of the “cleverer children” better [teachers in S1 and S3 and teacher survey]: “The weaker children do not take much interest. Giving individual attention to such children requires a lot more time than is available.” This is echoed by a teacher in [S4], who in addition blames the socio-economic background of some children for their poor learning and inability to capitalize on the possible benefits of SCI. These beliefs are no different from what one would expect in a typical government school; the consequence is that the disadvantages of a traditional classroom get reproduced in spite of a technology that was ostensibly meant to improve the learning levels of all children, regardless of their socio-economic condition. The categorization of children described here has another consequence, often found in traditional classrooms. The teachers continue to rely on what they call an “overall assessment” of the class; the more active participation of the academically better children in operating the software or in the assessment exercises is equated with “class performance.” This may work against the interests of those who are academically weaker and unable to keep up with the pace of instruction.

The academic inequalities that characterize the classrooms would have demanded a degree of personalization of instruction, but in its absence, SCI reinforces the teachers’ belief that an overall assessment is sufficient. The key problem, which the teachers do acknowledge but seem unable to address, is that many students seem to be poorly equipped to deal with the demands made by the syllabus of the grades in which they are studying. Paradoxically, when the students are thus underprepared, the teachers tend to focus on the right answers and leave it to the students to figure out the process. Thus, inequalities in prior preparation carried into the classroom and a discourse that focuses on “knowing content” work against many children in both SCI and non-SCI classrooms.

SCI challenging ideas of teacher autonomy: teacher responses as accommodation

The interaction of SCI with teacher beliefs is nuanced; it does not always reinforce traditional teacher beliefs. In some cases, it challenges them. This effect, which the interviews with officials revealed to be unintended, provokes either accommodative responses, as in the case of the challenge to teacher authority and autonomy, or resistance that takes the shape of conversational performances substituting for dialogue. We discuss the first response in this section. As noted earlier, SCI content has been interpreted as a “more engaging text.” The theory section is a repetition of the textbook, but the “engaging” part comes from the videos and animations that accompany the text, and the gamified assessments. In an effort to make things easy for the teacher, the software, through its animations, substitutes for what a teacher would do. Given the limitations of space, we describe just one vignette from a language class, a lesson on a poem. After the ‘theory’ portion, which the students read on the screen as the teacher moved around the class, the sections that followed were the poem set to music; an “Explanation of the Poem”, during which the teacher often paused the video to ask for the meanings of a few words; and the poem with a few blank spaces which the students were required to complete orally. As this was happening, the teacher joked with the class, “Are you reading properly? This is what you might be asked in the exam.” Finally, the teacher announced, “We will play the quiz.” This proceeded rapidly, with no discussion of the wrong answers.

A number of experiences similar to this have shaped a new understanding of teacher autonomy and pedagogical practice. As many teachers note in the survey, “Everything is there; we just have to play it.” One teacher explained this finding: “If the teacher has to give her own explanation, the audio explanation in SCI would become redundant and we might be seen as not using the resource; if we do not give the explanation, then we just have to implement the program.” Many teachers have reacted with an accommodative response that ascribes a dominant, almost messianic, role to the ‘SCI package’. The implementer role that the teachers adopt as a consequence is justified by the visibly increased student engagement. Learning is then expected to follow from this higher engagement—an assumption based more in hope than in classroom realities. This behavior is consistent with the beliefs of the SCI administrators interviewed: “we want systems that teachers can implement with ease” (emphasis added). Such beliefs, the assumptions of the content developers that technology had to “make everything simple for the teachers,” and the lack of training in the pedagogical use of the technology together serve to construct the teacher as merely the implementer of a package. This has implications, as yet poorly understood, for the teacher as someone who has pedagogical autonomy and is expected to integrate technology into ongoing pedagogical practice.

SCI challenging teacher-centric beliefs: conversational performance substituting for learning dialogue

SCI, through its design, challenged the teacher-centered belief in lecturing as the dominant method; but when teachers resist this challenge, learning suffers. In a mathematics class [S3], while the teacher was manipulating different geometric shapes and tools on the smartboard, he kept up a stream of one-way conversation, punctuated by questions. In [S2], in a geometry class, the teacher began with a series of content-related questions which required yes or no answers. As he used the geometry toolbox of the smartboard, his questioning followed a pattern: a student who was asked a question had to stand up and answer; if the answer turned out to be wrong, another student got a chance to answer, but the first student remained standing. This pattern continued till the right answer was given. These two episodes illustrate a common practice across the schools: an attempt to reassert the belief in teacher-centered ‘talk’ as superior to other modes of delivery, in response to the challenge posed by the demonstration and interactive possibilities of the technology. The consequence was that the pattern of interaction was no different from what one would observe in a traditional classroom—only the smartboard had replaced the blackboard. The conversational performance, however, created an image of an interactive classroom: students did respond with answers, though not necessarily the right answers, while the focus remained on getting the content right. Once again, in spite of the interactive possibilities that could have led to better technology integration, beliefs about the ‘right’ pedagogy and its goals tend to reinforce the classroom as a place where genuine democratic conversation focused on understanding is difficult.

Discussion and conclusion

This paper has examined a large smart-class initiative (SCI) in a public schooling system with a view to understanding the relationship among technology integration, teacher beliefs and teachers’ interpretation of the content that is made available to them. Consistent with studies that report no significant impact of technology-led initiatives on student academic or non-cognitive outcomes, we did not find a significant difference between SCI classrooms and non-SCI classrooms. We then explored the reasons for this finding by studying how teacher pedagogical beliefs interacted with a new technology to reproduce traditional classroom processes and effects in an environment that was ostensibly technology-rich. The pedagogical beliefs that teachers hold are complex (Ertmer and Ottenbreit-Leftwich 2010), and are shaped by the formative processes that teachers have gone through and the contexts in which they work. Such beliefs may broadly be seen as teacher-centered or student-centered (Tondeur et al. 2008). In the strongly hierarchical pedagogical system that obtains in many Indian public schools, practices tend to be teacher-centric and teachers tend to be “strict” (Tiwari 2015). Classrooms are characterized by a dominant role for the teachers, and negative teacher behaviors are often present (Tiwari 2015, 2018; Anand 2014), prompting India’s National Curriculum Framework to call for teacher beliefs to move towards student-centeredness (NCERT 2005, p.82). Generally speaking, at least in the public system, teachers are less likely to hold constructivist beliefs or to be highly active users of technology (Ertmer et al. 2015). This context demands that those in charge of introducing technology in the public system be aware of how the assumptions made by content developers and trainers interact with the traditional beliefs of teachers. Tondeur et al. (2017) note that, over time, teachers’ use of technology in their practice would lead them to adopt more student-centered beliefs, which in turn would influence technology integration. However, the beliefs of those in charge of designing and implementing ICT-led interventions in the public system, which form part of the contextual factors (Tondeur et al. 2017), may play a significant role in supporting or hindering such technology integration. Even an idea such as gamification, for which there is positive evidence (Deterding et al. 2011; Dicheva et al. 2015), can easily be subverted, as shown in this paper, if the developers are not aware of the need for a functional integration of pedagogy and play (Tulloch 2014).

Governments in countries such as India have come to rely on vendors and other agencies not just for hardware but also for educational material (Gurumurthy 2015), mainly because of a lack of content development capability within the system. First, ensuring that the beliefs of content developers do not militate against technology integration requires a more broad-based consultative process involving a range of educational actors such as knowledgeable teachers, academics and other nongovernmental agencies dealing with ICT in education. Second, support for technology integration in the form of ongoing training is necessary. Such training should ensure that, regardless of the teacher-centered or student-centered assumptions that teachers may hold, technology can be worked into the pedagogical plans of the teachers. It should also help guard against the ICT intervention reinforcing traditional beliefs, for instance the dominance attributed to ‘knowing content’ and the textbook, or provoking resistance behaviors. A third implication is the need for governments to be aware of the requirements of a personalized learning system when considering technology integration in the future. Lee (2014) and Lee et al. (2018) discuss five features of such a system: (1) a personalized learning plan, which was missing in the SCI case; (2) competency-based rather than time-based student progress and (3) criterion-referenced rather than norm-referenced assessment, neither of which was built into SCI; (4) project- or problem-based learning, for which SCI offered no opportunity; and (5) multi-year mentoring of students. This last point is important, since current thinking, as in SCI, limits technology integration to selected grades. The SCI experience opens up the possibility of improving the design of technology-based learning interventions by incorporating a personalized learning focus.

In sum, greater awareness of the bi-directional relationship between technology integration and teacher beliefs, and of the processes by which pedagogical beliefs or perceived belief-related barriers hinder technology integration, as in the resistance behaviors described above (Tondeur et al. 2017), is necessary to ensure that learning processes are not hindered. This attention at the design stage must be complemented by rigorous attention to long-term professional development of teachers that is situated in the context of teachers’ work with technology (Sang et al. 2012; Tondeur et al. 2016; Kopcha 2010). Ultimately, how teachers respond to externally generated content, and how teacher beliefs and practices influence technology integration in the classroom, will determine the extent to which the cognitive and non-cognitive outcomes expected from ICT-led initiatives are realized.