Introduction and context

The Anglican Church Grammar School in Brisbane (usually known as Churchie), the site of our study, is an independent all-boys Kindergarten–12 school with plans for significant development of its learning environment over the next few years. This architectural development coincided with rapidly evolving, sector-wide technological advancements that were challenging traditional teaching methods in the school. Churchie teachers wished to ensure that future building programs and investment in new technologies would benefit students by positively influencing pedagogy, student learning and student engagement.

To help inform these decisions, Churchie teachers embarked on a combined retro-fit and research strategy. The initial step was to trial and evaluate three types or ‘modes’ of learning spaces (Fisher 2006), with a particular focus on how they accommodate use of technology. Four research questions addressed this process: (1) Does the mode of the classroom impact teacher use of 1:1 technology? (2) Does the mode of the classroom have an effect on the perceptions of the quality of teaching? (3) Does altering the learning mode have an effect on student engagement? (4) Is there a link between the classroom learning space and student achievement in mathematics? Churchie teachers collaborated in this research with Melbourne University’s Learning Environments Applied Research Network (LEaRN), an industry/academic initiative established in 2009 by that university’s Faculties of Education and Architecture. LEaRN’s primary mandate has been to utilise high-quality research to underpin learning space design and use across all educational sectors.

This paper discusses the implementation and results of this collaborative project. In that initiative, three classes of grade 7 students at Churchie, together with their home-group teacher, spent a term in each of three classrooms that were re-modelled by the school prior to this research. The first classroom involved little change from the traditional teacher-centred classroom space that is common in this school. The second classroom was altered to facilitate student-centred learning, typified by group work and informal interchanges with teachers and other students. The third classroom combined the flexibility of traditional and casual furniture with the integration of digital and visual technologies to create a dynamic and interactive 360° or polycentric learning space (Monahan 2002; Lippman 2013). The aim was to create a highly dynamic and adaptive space, aligned with the affordances of digital and visual technologies, to facilitate more personalised and responsive learning experiences. With the generous support of the teachers who used those rooms with the three classes, this arrangement provided the platform for a short but effective study of how teachers and students utilise space in their teaching and learning.

In recent years, rapid advancements in technology have coincided with Australian federal and state government school re-development initiatives aimed at significantly advancing innovative learning environments. The physical environments of many educational institutions either now have, or in the near future will have, three modes of learning spaces at their disposal: ‘formal’ or traditional classrooms focused on largely didactic pedagogy (Fisher’s 2006 ‘mode 1’ space); student-centred spaces focused on transactional approaches to instruction (mode 2); and the ‘third space’, where social activities overlap informal and active learning activities in spaces such as covered outdoor learning areas, hallway ‘nooks’ and lounge-styled rooms (mode 3) (Brooks 2011; Fisher 2003; Soja 1996). Combined with the recent proliferation of digital technology, particularly mobile technologies, wireless and e-learning systems, these new learning environments provide the infrastructure to inspire teachers to reconceptualise the possibilities of the teaching and learning equation (Alterator and Deed 2013). The adaptation of the three classrooms used in this study adhered to these three modalities.

Towards the end of 2010, Churchie was certainly moving in this direction, being a school where significant advancement in the use of digital technology was already established. Since 2009, the school has required students to own a tablet so that teachers can develop teaching resources, deliver curriculum and assess student knowledge on a 1:1 basis. Microsoft OneNote was used for real-time collaboration between multiple users, while DyKnow enabled teachers to connect to student tablets in ‘teacher-whole class’, ‘teacher-individual student’ and ‘student–student’ modalities. Together, these software platforms allowed synchronous formative and summative assessment, which in turn enabled teachers to track student learning outcomes, give formative feedback and record student progress. However, at Churchie, these noteworthy initiatives occurred in learning environments that had changed little in the school’s 100-year life span. The classroom settings at the time of this study were arranged in a ‘front-of-class’ orientation of desks and teacher equipment, tables were arranged to suit a didactic approach to instruction, and limited opportunities were provided for student interaction and group-focused activities.

Churchie was not alone in retaining century-old styles of learning environments. This is understandable, because scant evidence exists to show that expensive fit-outs actually affect the way in which teachers teach and students learn (Brooks 2011; Walker et al. 2011); this paucity of knowledge motivated this study. Churchie required empirical evidence that investment in modern learning environments would be justified by their capacity to improve student engagement and learning outcomes, yet little evidence existed to make this case. Hattie’s (2009) synthesis of meta-analyses revealed a negligible effect for space on student learning outcomes, but this measure did not account for the teaching practices that such spaces could afford; simply changing the space was not enough. Literature reviews by Cleveland and Fisher (2014), Blackmore et al. (2011) and others lamented how few empirical studies addressed the gap that Hattie emphasised, namely, the impact of learning environments on teaching performance and on student experiences and outcomes. Painter et al. (2013) attributed this lack of knowledge to the complex variables inherent in education that make randomised control trials or sound quasi-experimental studies difficult to implement. Alterator and Deed (2013) summarised the pervading sentiment among researchers on this topic by recommending qualitative studies that focus on affective issues. Unfortunately for Churchie, however, measures of effect were required to support changes within the school. This required the development of innovative methodological approaches to quantify the impact of learning spaces on teacher performance, student engagement and student learning outcomes.

This lack of evidence was a problem common to all schools, not just Churchie. In recent years the re-conceptualising and inhabiting of new spaces had moved at an unprecedented pace, but teachers’ abilities to utilise them effectively had not always matched this growth. A literature review (Abbassi and Fisher 2010) suggested that many teachers had poor ‘environmental competence’ and thus limited capacity to “… understand and effectively use physical instructional space for a pedagogical advantage” (Lackney 2008, p. 133). Even when in new learning spaces, teachers would persist with traditional elements of direct instruction, either because of limited awareness of, and ability to utilise, the physical elements of the classrooms they were given, or because of a tendency to over-emphasise the need to ‘control’ student behaviour (Lackney 2008). Influenced by the unprecedented spending on school infrastructure brought about by Australia’s Building the Education Revolution (BER), teachers had been challenged to re-skill in order to maximise the instructional use of new learning environments. Given the paucity of empirical evidence on ‘what works’, this has been a daunting task—particularly when new technologies are added to the repertoire of skills to be mastered. This growth has occurred in a climate of little effective evaluation of how teachers utilise these spaces and of their impact on students’ learning and engagement. Our study addressed this shortfall in knowledge.

The study

Design and procedure

There was a sense of serendipity about the implementation of this project. It is rare in education that a quasi-experimental design of this nature can be implemented, given the complexity of issues inherent to the school setting. These issues, such as the need to maintain equality of learning experiences, to ensure teacher participation and to meet the school’s operational requirements, often thwart such research approaches (Blackmore et al. 2011). In this instance, the school’s curriculum structure, in which year 7 students were predominantly taught in one room with (mostly) one teacher, the teachers’ willingness to participate, and the school’s desire to facilitate research on this topic created a scenario within which the project was successfully implemented.

Three modes of classroom, occupied by three year 7 classes that were taught similar curricula and retained the same classroom teacher throughout each intervention, reduced extraneous variables to a considerable degree. This allowed the study to focus on differences caused by changes in the classroom designs (the interventions). The study utilised a powerful single-subject research design (SSRD), used regularly in the applied medical sciences, in which repeated measures of identifiable actions by an individual are gathered across a number of interventions (Horner et al. 2005). The aim is to establish a functional relationship between the manipulation of these interventions and the subsequent effect on the dependent variables (Horner et al. 2005). This study established this relationship by selecting three classes, which acted as their own control, baseline and unit of analysis (Cakiroglu 2012; Horner et al. 2012). This negated the threat of between-subject variability (Horner et al. 2005) and the internal validity threats of selection and testing (Rindskopf 2014).

A synthesised alternating treatment and withdrawal design collected empirical data across a baseline (mode 1) and two interventions (modes 2 and 3) (Cakiroglu 2012). In practice, the procedure was straightforward. Two year 7 classes had their main classrooms re-modelled during the first term break: one as a mode 2 classroom with ‘cluster’ table arrangements and other modifications, and the other as a mode 3 classroom. The latter was fitted with multiple whiteboards, multiple TOWs (portable televisions on wheels) and a variety of non-traditional furniture. Architectural modifications to the actual rooms were not deemed important within the scope of this study (Fisher, personal correspondence). The third classroom was not altered; it remained in a ‘traditional’ mode 1 format, with individual tables and chairs oriented towards a teacher’s table and a whiteboard at the front of the room.

Each class had a ‘home’ teacher who taught most subjects, with the exception of some specialist subjects (i.e. Art, Music and Physical Education). For the study, the classes, accompanied by their teachers, spent one term in each classroom during terms 2, 3 and 4. Each move between classrooms was treated as a separate ‘intervention’. Every 3 weeks during these three school terms, students completed a single 10-min survey in normal class time: the Linking Teaching, Pedagogy and Space survey (LTPS), which was designed specifically for this project. In addition, summative test results in mathematics were collated for later comparison. In this way, students and teachers provided repeated measures across time and three intervention phases, with differences in measures being attributable to the changes in classroom design.

Sampling

The school organised its approximately 170 grade 7 students into six classes based on external standardised testing in mathematics. In the year of this study, two classes comprised high-ability students and the remaining four comprised mixed-ability mathematics students. From this pool the study design required three classes—C1, C2 and C3. One high-ability and two mixed-ability classes were chosen from the six based on their form teachers’ (T1, T2 and T3) agreement to participate. All students from these classes who returned informed consent forms were included in the study. The resulting sample of students (n = 52) represented a suitable participation rate (65 %) across the three classes. This number met Diggle et al.’s (2002) formula for determining effective sample size when measuring the effect of interventions across multiple time points and when calculating 95 % confidence intervals (CIs).
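As an illustration only, the sketch below (in Python) applies a longitudinal sample-size formula of the general kind described by Diggle et al. (2002). The effect size, within-subject correlation and number of survey occasions are assumed values chosen for the example, not parameters reported for this study.

```python
from scipy.stats import norm


def subjects_per_group(delta, rho, n_occasions, alpha=0.05, power=0.8):
    """Approximate subjects per group needed to detect a standardised
    difference `delta` between two group means, each measured on
    `n_occasions` occasions with within-subject correlation `rho`
    (compound symmetry), following the general longitudinal formula
    described by Diggle et al. (2002)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * (z_a + z_b) ** 2 * (1 + (n_occasions - 1) * rho) / (n_occasions * delta ** 2)


# Illustrative values only: a 0.8 SD shift, rho = 0.5, nine survey occasions.
print(round(subjects_per_group(delta=0.8, rho=0.5, n_occasions=9)))
```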

Data collection

We wished to find out if students’ standardised mathematics scores and responses to the LTPS survey altered as they ‘rotated’ between the three modes of classrooms each term. The argument was that any change in plotted data would be attributable to the type of space in which students studied for that term. Four dependent variables were used for this study, the first three of which structured the development of the LTPS, and are summarised together with independent variables in Table 1.

Table 1 Summary of dependent and independent variables

To determine the independent variables, relevant educational theories currently in use in Australian schools were used to identify features that typified each theory’s presence in classrooms. For example, to identify whether ‘good teaching’ had occurred (the dependent variable associated with Research Question 2), the survey involved independent variables structured on the Victorian Government’s well-accepted Principles of Learning and Teaching (PoLT) (State of Victoria 2007). To identify when a student was ‘engaged’ (the dependent variable associated with Research Question 3), the survey included questions on engagement involving cognitive, emotional and behavioural actions (Fredricks et al. 2004). A test of causality for these variables was undertaken (Gray and Guppy 2007) to ensure that all critical factors within the guiding educational theories were accommodated within a logical progression in the survey. Survey questions were then developed to address each independent variable. The resulting survey was trialled with colleagues, refined to minimise time demands, tested on a sample of non-participating students, and uploaded to Survey Monkey.

The fourth research question was measured through summative student achievement in the school’s year 7 mathematics program. The data for this research question were obtained through synchronous summative assessment, using in-class examinations, problem-based investigations and group-based assignments. Individual student results determined at the end of each term were used because they were marked and moderated against standardised marking criteria across all classes.

Data analysis

Analysis of results for research questions 1, 2 and 3 was undertaken through the process of ‘visual graphic analysis’. Visual analysis, frequently utilised in SSRDs, is a proven mechanism for observing changes in level, trend and variability within and between the baseline and intervention periods (Byiers et al. 2012). It is able to derive a functional relationship between the independent and dependent variables that is equivalent to paired t-tests (Bobrovitz and Ottenbacher 1998), and it conveys the precision with which that relationship has been estimated (Baguley 2009, 2012; Masson and Loftus 2003). Generally, visual analysis has centred on the analysis of a single case. However, a sufficient sample size and high retention rate (92 %) in our study allowed the students in each class to be grouped and summed as a single subject (Ivankova et al. 2006).
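As a concrete illustration of this step, the following Python sketch computes class means and 95 % CIs for each collection point and plots them for visual inspection of level, trend and variability. The file name, column names and data layout are assumptions made for the example; the study’s actual dataset is not reproduced here.

```python
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt

# Hypothetical long-format survey export: one row per student response, with
# columns class_id, mode, occasion and response (1-5 Likert scale).
df = pd.read_csv("ltps_responses.csv")


def mean_ci(x, confidence=0.95):
    """Mean and 95 % CI half-width (the 'arm') for one collection point."""
    x = np.asarray(x, dtype=float)
    half = stats.sem(x) * stats.t.ppf((1 + confidence) / 2, len(x) - 1)
    return pd.Series({"mean": x.mean(), "ci": half})


summary = (df.groupby(["class_id", "mode", "occasion"])["response"]
             .apply(mean_ci)
             .unstack())

# Plot one class's trajectory: the means give level and trend, and the
# 95 % CI whiskers at each collection point show variability.
c1 = summary.loc["C1"].reset_index()
plt.errorbar(range(len(c1)), c1["mean"], yerr=c1["ci"], fmt="o-", capsize=4)
plt.xlabel("Survey occasion (terms 2-4)")
plt.ylabel("Mean response")
plt.show()
```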

This visual analysis of class means, with the application of 95 % CIs, was used to evaluate the true effect of each intervention. This was an improvement over traditional single-point analysis, based on the recommendations of Baguley (2009) and Nourbakhsh and Ottenbacher (1994). The 95 % CIs provided an implicit visual representation of whether or not a ‘statistically significant’ difference between the means had emerged, and thus a basis for rejecting or accepting the null hypothesis (Cumming 2007, 2009). The criterion used to determine statistically significant differences was adapted from the literature (Baguley 2012; Masson and Loftus 2003) and consisted of mean level change and trend, overlap of confidence intervals, and trend and variability of confidence intervals. The work of Cumming (2007, 2009) on CIs was used to interpret statistically significant differences: ‘non-overlapping’ (NO) CIs (p ≤ 0.001), CIs just touching (p = 0.01) and overlap of up to half of one CI arm (p ≤ 0.05). Items intended to measure each of the 17 independent variables were subjected to post hoc biserial correlation item analysis, which allowed the results to be analysed according to the research questions.
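A minimal sketch of this CI-overlap rule of thumb is shown below, with placeholder means and CI arm lengths rather than values from the study.

```python
def ci_overlap_significance(mean_a, arm_a, mean_b, arm_b):
    """Classify the difference between two independent means from the
    relationship between their 95 % CIs, following the rule of thumb
    described above (Cumming 2007, 2009)."""
    gap = abs(mean_a - mean_b) - (arm_a + arm_b)  # > 0 means the CIs do not overlap
    half_arm = 0.5 * (arm_a + arm_b) / 2          # half of the average arm length
    if gap > 0:
        return "non-overlapping: p <= 0.001"
    if gap == 0:
        return "just touching: p = 0.01"
    if -gap <= half_arm:
        return "overlap of up to half one arm: p <= 0.05"
    return "not statistically significant by this criterion"


# Illustrative values only (not results from the study).
print(ci_overlap_significance(3.9, 0.25, 3.2, 0.30))
```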

For Research Question 4, between-group comparisons of summative student achievement of the three participating classes (C1, C2 and C3) were tracked against their ‘like-ability’ non-participating classes (N1, N2 and N3). These classes were matched on the basis of external standardised cognitive ability test scores. Student achievement in the different classroom modes was compared against the baseline data through factor analysis, Cohen’s d effect size calculations and single-sample t-tests.
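The between-group comparison can be sketched as follows, using placeholder score arrays in place of the classes’ actual end-of-term results. The single-sample t-test and pooled-SD Cohen’s d shown here follow the general approach named above, not the authors’ exact computation.

```python
import numpy as np
from scipy import stats

# Placeholder end-of-term mathematics scores; the study's actual results are
# summarised in Tables 5-7 and are not reproduced here.
c1_scores = np.array([78.0, 82, 91, 69, 85, 88, 74, 80])
n1_scores = np.array([71.0, 75, 83, 66, 79, 70, 68, 77])

# Single-sample t-test of the participating class against the mean of its
# matched non-participating class, as described for Research Question 4.
t_stat, p_value = stats.ttest_1samp(c1_scores, popmean=n1_scores.mean())

# Cohen's d using the pooled standard deviation of the two classes.
pooled_sd = np.sqrt((c1_scores.var(ddof=1) + n1_scores.var(ddof=1)) / 2)
cohens_d = (c1_scores.mean() - n1_scores.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```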

Procedure

The research was conducted with little variation from the approach previously described. Students and teachers willingly enlisted, and during the term 1 break the classrooms were modified. Teachers and students rotated between the three classrooms at the following term 2 and term 3 breaks, so that each student and home teacher spent one term in each of the unmodified (mode 1), mode 2 and mode 3 classrooms. The LTPS was conducted every 3 weeks via Survey Monkey, and the principal researcher conducted 1-h interviews about the ‘experience’ with each staff member at the end of each intervention. As a result, the researchers collected a wealth of quantitative and qualitative information; this article focuses on the former. The survey’s 38 items yielded data relating to 17 independent variables (see Table 1) and, with a high retention rate across the term of the study, the LTPS generated over 17,000 data points. Because these cannot be reported in full within the scope of this paper, the following section clusters results around the three issues critical to this study—did the survey indicate that the mode of classroom made a difference to teacher use of ICT, to teacher and student perceptions of the quality of teaching, and to student engagement? The section finishes with a report of the separate statistical analysis of mathematics scores.

Results

Did the mode of classroom impact teachers’ use of ICT?

The short answer to this question is ‘yes’ and, in two of the three classes, this effect was statistically significant (Table 2). The LTPS questions on ICT use focused on effectiveness (A1), incidence (A2) and flexibility (A3) of technology in the lessons being taught. When plotted and analysed visually, students’ perceptions of the effectiveness, incidence and flexibility of ICT use in lessons improved consistently across the term in the mode 3 space—the only mode to record this trend. Often starting at a common position in all three classroom modes, student perceptions of use of ICT became quite different by the end of the term, with observable improvements only in the informal space. Something happened in the mode 3 space within all three classes, and it was positive.

Table 2 Summary of visual analysis for changes in student perceptions of the effectiveness, relevance and flexibility of ICT use between mode 1 (baseline) and mode 2 and 3 (intervention) spaces

Figure 1 provides one visual plot (in this case, ‘effectiveness’) as an example of the trend also seen in ‘relevance’ and ‘flexibility’. The 95 % CIs are represented by the solid vertical ‘whiskers’ and, when overlapping by up to half of one CI arm, indicate a significant difference (p ≤ 0.05) (Cumming 2007, 2009).

Fig. 1 Effectiveness of 1:1 technology: C1, C2 and C3 class averages, with 95 % CIs, for each classroom mode

Did the mode of classroom have an effect on the perceptions of quality teaching?

This section of the survey instrument contained a total of 14 questions that addressed students’ perceptions of how the changing classroom space influenced the way their teacher taught and the pedagogy employed. Given the substantial number of variables (seven) and items for this research question, reliability was determined using Cronbach’s alpha coefficient. The variables whose items achieved the reliability criterion (0.8 ≤ α ≤ 0.95) outlined by Gliem and Gliem (2003) were ‘teacher ownership (B1)’, ‘fostering student self-regulation (B3)’ and ‘deeper levels of student understanding (B5)’.
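For reference, the following is a minimal sketch of the Cronbach’s alpha calculation applied to a block of items measuring one variable, using illustrative Likert responses rather than the study’s data.

```python
import numpy as np


def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) matrix:
    k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)


# Illustrative responses (5 students x 3 items), not the study's data.
items = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]])
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}", "retained" if 0.8 <= alpha <= 0.95 else "not retained")
```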

One survey question addressed the B1 variable, which focused on students’ perceptions of their teachers’ ownership of the curriculum. There was a clear statistical difference, attributable to non-overlapping confidence intervals, between the mode 3 and mode 1 spaces for the C1 and C2 classes (Table 3). For class C1, there was also a clear statistical improvement in the mode 3 space compared with the mode 2 space. For class C3, the average student response was higher for the mode 2 and 3 learning modes than for the mode 1 environment at the end of each intervention, but the difference did not meet the criteria for statistical significance. In all three classes, a positive trend across the term indicated that the longer students spent in the mode 3 space, the more they perceived their teachers as focused on teaching well.

Table 3 Summary of visual analysis for changes in student perceptions of teacher ownership, fostering self-regulation and deeper levels of thinking between mode 1 (baseline) and mode 2 and 3 (intervention) spaces

Three survey questions addressed the B3 variable, which focused on a teacher’s capacity to foster student self-regulation. The questions in the LTPS instrument were focused on the elements of ‘foster student independence’, ‘interdependence’ and ‘self-motivation’. The aim was to determine whether a change in learning modality enabled teachers to make the hypothesised shift in practice from teacher-centric to student-centred pedagogy. The clear statistical difference at the end of both interventions in all classes (Table 3) indicates that both the mode 2 and 3 spaces did have an effect on this element of teacher practice. It can be inferred that more student-centric and informal spaces facilitated learning experiences that required students to exercise choice in how they solved problems, to take more responsibility for their learning and, ultimately, to be more engaged. Interestingly, in class C2, students indicated a perception of more student-centred learning at the end of the mode 2 intervention than at the end of the mode 3 intervention.

Two survey questions addressed the B5 variable, which focused on challenging students towards ‘deeper levels of thinking’. These questions focused on the degree to which student learning outcomes were extended and the encouragement students received to use their initiative. The C1 and C2 classes again recorded a statistically significant increase in positive student responses at the end of the mode 3 intervention compared to mode 1 (Table 3). It was noticeable that, for class C2, the sharp increase in student responses at the end of the mode 2 intervention was similar to that observed for other variables tested. Students from class C3 recorded a similar trend of responses in the mode 3 space to that for the B1 variable. However, unlike the B1 variable, C3 class responses in the mode 2 space were not only higher, but statistically significantly so, when compared to the mode 1 space.

Did altering the learning mode have an effect on student engagement?

The final section of the student survey contained a total of 10 questions that addressed six independent variables (Table 1). These variables focused on students’ perceptions of the influence of the changing classroom space on their level of engagement. Because of the number of variables, and on the basis of alpha reliability, this paper focuses on ‘positive student attitudes (C1)’, ‘willingness to take on a challenge (C5)’ and ‘willingness to work beyond ZPD (C6)’.

Two survey questions explored the impact of the space on students’ positive attitudes (C1) by ascertaining their levels of enjoyment of, and positivity about, their learning. There was a clear statistical difference, attributable to non-overlapping confidence intervals, between mode 3 and mode 1 for all three classes (Table 4). For this domain, all classes recorded strongly favourable responses at each collection point in the mode 3 space, with average student responses around the ‘Agree’ level. Finally, the length of time that students spent in the mode 2 and 3 spaces correlated directly with their level of engagement, as shown by a constant positive trend in student perceptions in these spaces.

Table 4 Summary of visual analysis for changes in student attitude and willingness to take on challenges and work beyond the limit of their expertise between mode 1 (baseline) and mode 2 and 3 (intervention) spaces

Three survey questions addressed the C5 variable, concerning students’ willingness to take on a challenge in each of the different classroom modes. These questions involved students’ work ethic when asked to master new knowledge, their motivation to gain good grades, and their willingness to solve problems in unconventional or innovative ways. This was the only variable for which all three classes achieved statistically significantly higher responses in the mode 2 and 3 spaces compared to the mode 1 layout. Interestingly, for the C3 class, a negative trend indicated that the length of time students spent in the mode 1 classroom space had a negative effect on their willingness to take on a challenge.

One question addressed the C6 variable, which focused on students’ willingness to work beyond their ZPD or limit of expertise. This question examined the ability of students to work on tasks that were difficult or made them think. Unlike the C5 variable, responses to this variable were generally not statistically different across the three modes; only for class C1 was there a statistically significant increase in the mode 3 intervention compared to mode 1 (Table 4). Furthermore, unlike other variables, there was no consistent trend between the length of time spent in each space and student perceptions.

Is there a link between the space and student achievement in mathematics?

Analysis of results for the fourth research question involved all grade 7 assignments and tests. To account for the confounding variables of individual cognitive ability and class composition, external standardised ability scores were obtained through the Academic Assessment Services (AAS) testing instrument. The average AAS ‘mathematics reasoning’ scores for each class enabled matched between-group comparison of the participating (C1, C2 and C3) and non-participating classes (N1, N2 and N3). The non-participating students had no interaction with any of the rooms used in the study; all were taught in mode 1 styled rooms for the full year. Comparison of AAS means enabled matched comparisons between C1 (M = 45.57) and N1 (M = 45.68), C2 (M = 31.98) and N2 (M = 31.77), and C3 (M = 31.63) and N3 (M = 31.11). The AAS data were also used in the construction of the high-ability mathematics classes (C1 and N1) and the heterogeneous classes (C2, C3, N2 and N3).

High-ability class comparison

The C1 class results in each of the learning modalities were compared with those of the non-participating high-ability control class N1 (Table 5). Both classes were composed of students of similar cognitive ability as measured by the AAS testing instrument. Students had been assigned to each class based on their Language Other Than English selection at the start of the year. The additional confounding variables of assessment type, instrument and curriculum were controlled; only the classroom teacher was not kept constant.

Table 5 Summary of statistical measures comparing learning outcomes of the high-ability C1 and non-participating N1 classes

Statistical analysis indicated that, throughout the study, there was a clear difference in student achievement between these two classes. Even though the classes comprised students of similar cognitive ability, the C1 class consistently outperformed their peers. The differences between the means for the C1 and N1 classes were statistically significant (p < 0.05) in each of the three modes and were supported by ‘very large’ effect sizes. This clear and consistent outperformance by C1 compared to N1 suggests that the difference in teacher, and therefore in relationships and pedagogical approach, was an influencing factor. However, the outperformance was slightly stronger in the mode 2 and 3 classroom layouts. Based on the student survey responses, this could suggest that in these spaces the teacher was able to facilitate slightly different uses of digital technology, pedagogical practices and/or student groupings, which could account for this trend.

Mixed-ability class comparison

The C2 and C3 class results in each of the learning modalities were compared with those of the non-participating, mixed-ability N2 and N3 classes (Tables 6, 7, respectively). These mixed-ability classes comprised a wider spectrum of student cognitive abilities. However, they were taught the same curriculum and completed the same assessment instruments as the high-ability classes (C1 and N1).

Table 6 Summary of statistical measures comparing learning outcomes of the mixed-ability C2 and non-participating N2 classes
Table 7 Summary of statistical measures comparing learning outcomes of the mixed-ability C3 and non-participating N3 classes

As with the high-ability class C1, the mixed-ability C2 class clearly and consistently outperformed their like-ability peers in class N2 throughout the study; according to the AAS external data, these classes comprised students of similar cognitive ability. The differences between the means for the C2 and N2 classes were statistically significant (p < 0.05) in each of the three modes, with similar results in the mode 1 and 2 spaces. The largest effect size (d = 1.30) was achieved in the mode 3 classroom layout. However, as in the earlier comparison, the outperformance by the C2 class in all modes reinforces the significant effect of the teacher.

In contrast to C1 and C2, class C3 achieved results that were lower than those of their like-ability peers in the N3 class. This underperformance was fairly consistent throughout the study and mirrored this class’s responses to the LTPS survey. The closest result between the C3 and N3 classes was achieved in the mode 3 space. This result differed from the mode 1 and 2 results and could be attributed to the statistically significant difference in the C3 class’s responses to the questions that focused on ‘student attitude’ (C1).

In summary, the comparison of the mathematics assessment data of classes C1, C2 and C3 against their like-ability peers, using the same instruments and assessing the same curriculum, suggests that the arrangement of space did have an effect on student learning outcomes. Of significant interest is the performance of C2 and C3 and, to a lesser degree, C1 in the mode 3 space. This correlation across these classes and their teachers suggests that the informal and adaptive mode 3 space, aligned with the affordances of digital and visual technologies, had an effect on both teaching and learning. The space appeared to influence the teachers’ pedagogical approach, which in turn affected student learning experiences and therefore student attitude.

Conclusion

This novel study produced sound quantitative evidence to support the growing argument that the arrangement of the physical learning space does matter. It was found that a more dynamic and adaptive space, aligned with the affordances of technology, had a positive and sometimes significant effect on student perceptions of their learning. This was determined through a robust design that combined visual analysis, single-sample t-tests and Cohen’s d effect size calculations of SSRD data to determine a statistically significant effect of the change in modalities.

From a student perspective, it was clear that technology was a valuable and relevant learning tool in each classroom setting. However, when students utilised technology in the more dynamic and flexible space, something changed in the manner in which they and their teachers utilised technology. This noticeable change was exemplified by the overwhelmingly positive student responses in relation to the importance of technology as a valuable learning tool whilst students were in the mode 3 space.

The noticeable and positive change associated with the mode 3 space extended into the more teacher- and student-centric domains. For the C1 and C2 classes, and to a lesser extent for the C3 class, the mode 3 space appeared to have a significant and positive effect on the teacher’s pedagogical approach and therefore on his/her students’ level of engagement. It could be argued that this ‘altered approach’, employed by the teachers of classes C1 and C2 in the mode 3 space, was a reason why those classes outperformed their like-ability peers.

Even though research into the effect of learning spaces on both students and teachers remains in its infancy, this study and the corresponding literature indicate that a link exists between classroom space and changes to teaching and learning. However, the findings do not indicate causality. Improvements in technology effectiveness and relevance, teachers’ pedagogy, and student engagement and learning outcomes in the mode 3 space (and to a lesser extent the mode 2 space) are noteworthy. However, they remain largely conjecture until further studies can better isolate the impact of specific pedagogies—a significant challenge. Despite this, our study makes a significant contribution to this discussion through its focus on both students and teachers in a middle-school setting, and through its attempt to control external variables through its SSRD.