Introduction

This paper focuses on the use of instructional videos in a large junior-level business course taught with a flipped classroom approach at a large, public, diverse, urban university in the western United States. Because many flipped classroom approaches rely heavily on pre-recorded lecture videos to present basic course content to students outside of classroom sessions, it is important to understand how students use these videos as resources for learning. Specific details about the flipped classroom design used in this course, the impact of the design on student academic performance, and student satisfaction with the course design have been reported in two other articles (Albert and Beatty 2014; Beatty and Albert 2016).

The first article described the flipped classroom design and assessed the impact of a flipped classroom versus a lecture pedagogy on student grades. Results indicated that students in the flipped classroom had significantly better exam scores than those in the previous lecture-based class. The second article reported results from a student satisfaction survey implemented in the flipped class. The survey showed several significant differences in perceptions of the flipped classroom model between successful (grades A-C) and unsuccessful (grades D-F) students. Students who performed better in the class, based on test scores, were generally more positive about the flipped class experience than students who were academically unsuccessful. The survey also indicated that shorter lecture videos are preferred by, and therefore may be more effective for, a majority of students, although this preference was less pronounced among more successful students.

The use of lecture videos has been well documented in literature describing common applications of lecture capture methodology and technology (Karnad 2013). In many flipped classroom designs, viewing pre-recorded lecture videos is an essential activity for students preparing for a classroom session. However, very few studies report data about the use of lecture videos beyond student and faculty perceptions of use and effectiveness; see McNulty et al. (2009) for one study reporting descriptive statistics about actual use of videos by students. The use of flipped classroom approaches is also well documented in academic literature; see O’Flaherty and Phillips (2015) for a summary of several dozen case reports from multiple countries. However, detailed descriptions and data about student use of lecture videos in flipped classrooms have not yet been widely reported. Therefore, this study begins to fill this gap with a report from our local context. Since we also have data regarding student academic performance and satisfaction with the course design, we are able to analyze patterns in video use and assess potential correlations among student video use, academic performance, and satisfaction with course design elements.

A Brief History of the Flipped Classroom

The term “flipped classroom” has become a popular label for a hybrid, or blended, instructional environment that moves information delivery activities outside of the regular class environment (time and place) and focuses the traditional class environment on active learning pedagogy, such as interactive discussion, small group work, and other methods of student engagement (Bergman 2013; Bergmann and Sams 2012; Hughes 2012; Strayer 2012).

Though the flipped classroom pedagogy isn’t new (Bonwell and Eison 1991; King 1993; Lage et al. 2000; Strayer 2007), the past decade has seen a growing emphasis on applying active learning pedagogy and instructional approaches to improve student learning. Most educational environments are still centered around traditional classroom activities, with information delivery occurring by a teacher lecturing at an established and regular time and place, and active learning happening (or not) outside of class as students complete homework assignments. Over the years, students with strong independent learning skills and resources have found success in this model. Many students, unfortunately, have failed to learn effectively in it. By moving the more difficult active learning activities into the classroom environment, educators hope to provide a more supportive and effective environment so that more students can learn content more deeply. Active learning takes time, of course, and with classroom time largely fixed, the logical way to make time for active learning is to move basic information delivery out of the classroom. Since most active learning approaches require students to have a basic level of content understanding, the information delivery activities must occur before the active learning experience. In essence, the traditional delivery model is “flipped” (Strayer 2012). It has been argued that a flipped classroom methodology creates a more learner-centered environment, which better supports students in actively constructing knowledge (Huba and Freed 2000; Michael 2006) and building important higher order thinking skills (Bloom 1956).

Use of Content Videos in Flipped Classrooms

Most flipped classroom designs provide instructional content videos (lectures) for students to review prior to attending live class sessions held on campus in a traditional classroom (or perhaps laboratory) environment. Since activities in the live class session rely on students’ understanding of the content presented in the videos (and perhaps also in other types of resources, such as readings), students’ use of the videos (viewing them and learning from them) should be an important step in the learning process.

Most studies describing flipped classroom practice do not report extensively on the use of videos by students; they tend to focus on higher level analysis of student academic performance, student satisfaction, and perhaps even instructor satisfaction (Albert and Beatty 2014; Beatty and Albert 2016; Bishop and Verleger 2013; Chua and Lateef 2014; Enfield 2013; Strayer 2012; U.S. Department of Education 2010). If the use of content videos is an important, or even critical, element of the flipped classroom, we need to better understand how students use videos – whether and how they watch them, and whether video use correlates with academic performance and student satisfaction. With this understanding, instructors and designers will be better equipped to implement instructional videos in a flipped classroom as important learning resources for students.

Study Context

A flipped classroom model was applied to an Introduction to Management course at a large, urban, highly diverse, AACSB-accredited Business School for the first time in the Fall 2013 semester. This Introduction to Management course (MGMT) is one of 12 core courses required of all 5200 business majors in the university and is taken during the junior year. For several years prior to offering the flipped version of the class, the course was taught by the same instructor using a more traditional large lecture style. Although video cases were used occasionally in the large lecture class, and there was some interactive student discussion of work experiences in the classroom, the course was primarily delivered in a passive lecture format.

The use of flipped classroom approaches at this university in general, and in the Business School more specifically, is still relatively uncommon; this MGMT course was the first flipped class in its department. A primary reason the instructor chose to implement a flipped classroom design was the availability of a convenient, university-supported Echo360 lecture capture system (locally branded as “CourseStream”) with an intuitive “personal capture” feature (see Fig. 1). The personal capture feature allows instructors to prerecord audio, video, and computer screens (presentation material) and deliver the recordings to students quickly and conveniently through the Learning Management System (LMS). The LMS at this university is Moodle, locally branded as “iLearn”.

Fig. 1 Example of CourseStream video player

Flipped Course Design

Like most classes at this university, this class is taught over a traditional 15-week semester and addresses one major topic each week. Students were assigned textbook readings and several short video lectures to review during each week of the course, and were expected to complete these reading and video assignments before attending each class. Content videos were captured and produced by the course instructor using the Echo360 personal capture software and a laptop computer provided by the university. The videos were created the semester before the course began and were based on the same content the instructor had used for the previous two years. Each video segment summarizes a portion of the week’s lecture material. The average total video time for each of the 15 weekly chapter topics was 76 min, organized into 2, 3, or 4 video segments per chapter. All video segments were available at the beginning of the semester and remained available throughout the semester.

Students were required to meet in person in a large lecture hall twice weekly (seventy-five minutes each time) for in-class active learning experiences, often interactive discussion focused on applying key concepts to students’ own work experience or to examples gathered through brief interviews with managers. This class met on a Tuesday/Thursday 9:35–10:50 am schedule. In all, the instructor used four types of content to promote active learning through discussion of the assigned readings and viewed video lectures: 1) application questions on each chapter’s key concepts, provided in the Course Notebook; 2) additional video cases with application questions; 3) professional movie clips demonstrating key concepts; and 4) other multimedia material created by the author or edited from business-oriented cable channels. Of these four, discussion of the application questions accounted for 50-65% of in-class time. All readings, videos, and other discussion resources were provided via the class website hosted by the university LMS.

Research Questions

When considering the effectiveness of the lecture videos, we need to recognize that although videos are available to students, and students are expected to watch them, some students may not schedule time on a weekly basis to do so. Just as some students cram for an exam, some may also “binge” on videos covering five chapters a few days before the exam (see Phillips et al. 2010 for a description of common patterns among lecture video viewers). Or perhaps some students do not watch the videos at all, or watch only some of the video segments that comprise a chapter. This study explores patterns of video-viewing behavior and potential correlations with student academic performance on exams and self-reported satisfaction with the overall flipped course design and video resources.

In this study, answers were sought to five main research questions:

  1. How many students watch assigned lecture videos? How often?

  2. How do students’ video viewing patterns (opening video links) change over the semester (e.g., by hour of day, day of week, and week of term)?

  3. How do students’ video viewing patterns change over a single topic?

  4. What is the relationship between students’ video viewing patterns and learning?

  5. How do students’ video viewing patterns correlate to their satisfaction with the overall learning experience?

Methodology

Student use of videos was determined solely through the analysis of server logs from the learning management system (LMS). These logs show the date and time that students opened video links; they indicate only that a video was opened and, we assume, viewed. LMS server (or system) logs do not allow us to determine whether or not students were actively engaged in learning from the videos. System logs are accessible to all instructors, who can select and view activity logs for all or selected:

  • students

  • activities (or links)

  • days of the class.

System logs identify the user account, the IP address in use, the activity (LMS link clicked), and a date/time stamp. To analyze student use of videos, we viewed and downloaded server log files for all students and all video activities across all days of the class term. Using Excel, we counted video views per student for each video segment. This allowed us to identify the general patterns in video viewing behavior reported in the Findings section of this report.
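
For readers who prefer to script this step rather than use Excel, the following is a minimal sketch of the same counting logic in Python/pandas. The file name and column names (student_id, activity, timestamp) are illustrative assumptions about a Moodle log export, not the exact fields of our system.

```python
import pandas as pd

# Illustrative Moodle activity-log export; column names are assumptions,
# not the exact field names produced by a given Moodle version.
logs = pd.read_csv("moodle_activity_log.csv", parse_dates=["timestamp"])

# Keep only rows where a lecture-video link was opened.
video_views = logs[logs["activity"].str.contains("video", case=False, na=False)]

# Count views per student for each video segment (the figure we tallied in Excel).
views_per_student_segment = (
    video_views.groupby(["student_id", "activity"])
               .size()
               .rename("views")
               .reset_index()
)

# Total views per student across the term, plus simple time-based breakdowns
# corresponding to the week-of-term and day-of-week patterns reported below.
views_per_student = views_per_student_segment.groupby("student_id")["views"].sum()
views_per_week = video_views.groupby(video_views["timestamp"].dt.isocalendar().week).size()
views_per_day = video_views.groupby(video_views["timestamp"].dt.day_name()).size()

print(views_per_student.describe())
```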

For each exam, we identified the median score and divided students into high performance (at or above the median score) and low performance (below the median score) categories. We also counted the number of video views associated with each major exam in the class, which allowed us to look for video use and exam performance correlations. Because we also had data about student satisfaction (reported in Beatty and Albert 2016), we were able to look for correlations between video viewing and satisfaction with various course design elements.
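
A similar sketch, under the same caveats, shows the median split and the simple checks for relationships between viewing and exam performance; the exam-score file, its columns, and the per-exam view counts are hypothetical stand-ins for our actual data.

```python
import pandas as pd

# Hypothetical inputs; file and column names are illustrative only.
scores = pd.read_csv("exam_scores.csv")        # columns: student_id, exam1
exam1_views = pd.read_csv("exam1_views.csv")   # columns: student_id, views (exam 1 videos)

df = scores.merge(exam1_views, on="student_id", how="left").fillna({"views": 0})

# Median split: high performers scored at or above the median on the exam.
median_score = df["exam1"].median()
df["high_performer"] = df["exam1"] >= median_score

# Descriptive comparison of viewing between performance groups, and a simple
# Pearson correlation between raw view counts and exam score.
print(df.groupby("high_performer")["views"].describe())
print(df["views"].corr(df["exam1"]))

# Counts like those in Table 3: students in each group who opened at least
# one video associated with the exam versus none.
df["watched_some"] = df["views"] > 0
print(pd.crosstab(df["high_performer"], df["watched_some"]))
```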

Next we report findings, which answer the study’s research questions.

Findings

In this section, data are presented about student use of videos in one section of an upper division management course in the College of Business at a large urban university in the United States. In this study, the term “video viewing patterns” refers to the day and time that video links were opened through the LMS.

Answering Q1: How Many Students Watch Lecture Videos?

Figure 2 shows the number of students who watched videos each week of the semester. Overall, over 8000 distinct video views were recorded in the LMS server logs. All students watched some videos during the semester, with an average of 28 video views per student and a median of 18. Though not all students viewed all videos, a substantial amount of student time was spent on video viewing: if we assume an average video length of 20 min, the entire class potentially watched over 2500 h of lecture videos during the semester (8000 views × 20 min ≈ 2670 h).

Fig. 2 Total and average number of video views per video by week of class

Answering Q2: How Do Students’ Video Viewing Patterns Change Over Time?

Figure 2 also shows the average number of video views per video per week over the course of the semester. At the beginning of the course (week 0), with n = 307, over 700 video views per video were logged. As the semester progressed, the number of video views per video dropped to approximately 200 per week, and varied considerably from week to week between 100 and 300 views.

Figure 3 shows the average number of video views per day of the week over the course of the semester. Not surprisingly, Monday through Thursday were the most popular days to view videos. The peak viewing day was Wednesday, with over 3000 video views. Friday, Saturday, and Sunday each had fewer than 1000 video views during the semester. Scheduled class meetings were held on Tuesday and Thursday each week, so it appears that students watched videos either to prepare on the days immediately before a class session or to “catch up” on content immediately after a class session and discussion.

Fig. 3 Total video views per day of week

Figure 4 shows the total number of video views per hour of day over the course of the semester. Not surprisingly, the hours from 9:00 am through 11:00 pm (hour 23 in Figure 4) were the most popular times to view videos. The peak viewing time of day was 9:00 pm, with over 800 video views. The hours from 1:00 am through 8:00 am each had fewer than 300 video views during the semester.

Fig. 4 Student views of videos by hour of day

Answering Q3: How Do Students’ Video Viewing Patterns Change Over a Topic?

Table 1 shows the patterns in student video viewing for topics presented in multiple video parts. The total number of video views (N) is presented for each video part, and for parts 2, 3, and 4 the number of views is also reported as a percentage of part 1 views (%N1). One topic was presented in each week of the class, in either two, three, or four video parts.

Table 1 Number of views for multi-part videos

Topics with two videos were presented in weeks 3, 9, 10, 11 and 12.

  • first videos received the majority of views, with 1150 total

  • second videos received about 71% as many views as first videos (800)

Topics with three videos were presented in weeks 0, 4, 5, 13, and 14.

  • first videos received the vast majority of views, with 2300 total

  • second videos received about 51% as many views as first videos (1200)

  • third videos received about 42% as many views as first videos (950)

Topics with four videos were presented in weeks 1, 2, 6, 7, and 8.

  • first videos received the majority of views, with 1500 total

  • second videos received about 67% as many views as first videos (1020)

  • third videos received about 58% as many views as first videos (830)

  • fourth videos received about 52% as many views as first videos (800)

There is a clear trend in student video viewing: fewer students watched the 2nd, 3rd, and 4th parts of each topic’s video sequence. This trend is very similar to the overall trend in video viewing across the semester: as time goes on, fewer students watch videos. Overall, approximately half of the students who watched the first video in a sequence watched all parts of that sequence. This is also consistent with the video-viewing trend across the semester.

Answering Q4: How Do Students’ Video Viewing Patterns Impact Learning?

A comprehensive explanation of student academic performance, as measured by exam and overall scores, is reported in Albert and Beatty (2014), which explored the difference in student academic performance between the traditional and flipped versions of this course. Significant improvements in academic performance were measured in the flipped version of the course. Table 2 shows the descriptive statistics for exam grades in the flipped version of the course.

Table 2 Student scores on three exams and overall

We identified high performing (at or above the median score) and low performing (below the median score) students for each of three exams and overall score. In Table 3, we identify the number of students in each performance category for each exam who watched some videos or no videos associated with the respective exam. We also identify the number of students in each performance category for the overall score who watched all videos or no videos.

Table 3 Number of students who watched videos

Looking at video viewing associated with the overall course score, 48% of high performing students watched all videos and 34% of low performing students watched all videos. Since all students watched at least one video, there are no students in either the high or low performing groups who watched no videos.

For videos associated with exam 1: all high performing students watched at least one video. Ninety-five percent of low performing students watched some videos associated with exam 1, and five percent watched no videos.

For videos associated with exam 2: sixty-six percent of high performing students watched some videos and thirty-four percent watched no videos. Sixty-seven percent of low performing students watched some videos associated with exam 2 and thirty-three percent watched no videos.

For videos associated with exam 3: forty-nine percent of high performing students watched some videos and 51% watched no videos. Forty-four percent of low performing students watched some videos associated with exam 3 and 56% watched no videos.

Answering Q5: How Do Students’ Video Viewing Patterns Correlate With Satisfaction Ratings?

Table 4 shows the correlations among the overall student evaluation score, the scores for survey components 1 and 2, the total number of student video views, and whether or not the student viewed all videos in the course. Component 1 consists of items associated with the specific activities implemented in this flipped class instruction, such as in-class discussions and application exercises. Component 2 consists of items associated with the length and perceived value of the pre-recorded lecture videos. (See Beatty and Albert 2016 for a full explanation of the survey components.)

Table 4 Correlations among evaluation scores and video viewing

The significant positive correlations between the overall survey score and the component 1 and 2 scores are expected and establish consistency across these measures of students’ perceptions of flipped classroom activities. There is also a significant positive correlation between the total number of video views and whether a student viewed all videos. Perhaps the most interesting correlation is between the survey component 2 score and the total number of views. Students who responded positively to survey questions regarding lecture video length and usefulness also tended to view more videos. If students find value in viewing videos of appropriate length, they are more likely to view more videos. Though a relatively simple concept, this highlights the importance of designing value into the lecture videos and keeping each video to a length that students consider appropriate.

Discussion

The findings presented in this report lead us to several key conclusions regarding student use of videos in this flipped classroom. First, the video-viewing pattern varies widely across students. Many students watched all assigned videos, many watched almost none, and all students watched at least one video. Many low performing students watched quite a few videos, but it is not clear how much those videos helped their learning (as measured by exam score). Over the course of the semester, as shown in Table 3, the number of students (both high- and low-performing) watching no videos for exams 2 and 3 increased dramatically. It is possible that many students did not expect that watching videos would help them on the exams. At the same time, it is likely that many students did find value in watching the videos, since approximately 50% or more of students watched at least some of the videos for each of the exams.

Second, even though there is a small positive correlation between the total number of video views and student satisfaction with the lecture video resource, there is no strong connection between video viewing and exam performance in this case. It is likely that the analysis measures are at too high a level to determine what, if anything, lecture video viewing contributed to student learning. Since this study relied solely on server logs of video link clicks, the viewing data says nothing about student learning from the videos. It only tells us that a student opened a video file, not whether the student actually watched the video, understood the content, or learned anything useful about the course topic. Server systems that measure time-based streaming for each chapter video segment could provide more precise information for analyzing correlations between student use of video and performance on exams.

Third, student video viewing was lower for the 2nd, 3rd, or 4th parts of a multi-video sequence for a chapter. Designers of instructional videos could respond to this finding by 1) frontloading the most important content for a topic into the beginning part of each video sequence, 2) designing each video part so that it uniquely contains important content, or 3) subdividing chapter units into smaller units, each associated with fewer videos. If each part of the video sequence contains important content that is not available in another information resource, and this design approach is explained to students, then students may be more likely to view each video. Designing every video so that it includes uniquely important content may also increase students’ persistence in watching videos over an entire term.

Finally, instructors may find it useful to know when students are viewing videos during a class week or during the day. In this study, the day between the two scheduled class sessions was by far the most popular video viewing day. Perhaps some students were “catching up” on video topics discussed on Tuesday and others were preparing for the next class session and discussion on Thursday, or both. Knowing that most students are actively viewing videos between the scheduled class sessions in a given week may be important for an instructor’s course design and lesson planning. Informational messages and reminders to students could be timed based on their expected or actual viewing behavior. Adjustments to class activities or content – whether due to student behavior or modifications to the overall class design – could also be timed based on expected or actual student video viewing behavior. This might be useful when planned in-class activities are dependent on students completing a video viewing assignment, or when time-dependent new content or activities are being assigned in the online environment to students and the instructor wants to make sure most students receive the new information sooner rather than later.

Instructors could release videos at the point in the semester when the video content is needed (for a particular unit or week) rather than pre-loading all videos at the start of a semester. A message to students could highlight key points or mention something special to watch for in the video(s). This might create anticipation among students for each week’s new video content and build curiosity. Increased curiosity may lead to more video viewing.

Instructors might also consider that the emerging trend toward “binge watching” of newly released television series among millennials and other demographics may predispose some college students to “binge watch” or cram class videos just as some rely on cram studying to prepare for major exams. It may be that video binging will become a common student reality just as traditional cramming for exams has been for many years.

Several other design recommendations might fit an instructional context: 1) send automatic reminders to students who have not viewed the videos by a particular day or time, since students who are not self-motivated to watch videos (or complete other preparatory activities) may respond differently to personalized emails coming from the instructor (a minimal sketch of such a reminder pass appears below); 2) use gamification strategies such as badges, leader boards, or points to increase student engagement and video viewing; 3) build interactive components into videos, such as quiz (or future exam) questions, poll-like questions, or other “interaction points” that require student action before the video continues; and 4) consider giving students choices about which videos to view. If students do not have to watch every video but are encouraged to select videos based on meaningful criteria (class performance, availability of alternative content resources, etc.) or on specific (challenging or interesting) content, this may give them more control (autonomy) over watching lecture videos.
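
To illustrate the first recommendation, here is a minimal sketch of an automated reminder pass, assuming a class roster and per-student view counts derived from the LMS logs as described in the Methodology section. The file names, column names, and mail server address are hypothetical; a production version would use the institution’s own mail and LMS integrations.

```python
import smtplib
from email.message import EmailMessage

import pandas as pd

# Hypothetical inputs: class roster and view counts for this week's videos
# (derived from LMS logs); file and column names are illustrative only.
roster = pd.read_csv("roster.csv")            # columns: student_id, email
week_views = pd.read_csv("week_views.csv")    # columns: student_id, views

merged = roster.merge(week_views, on="student_id", how="left").fillna({"views": 0})
non_viewers = merged[merged["views"] == 0]

def send_reminder(address: str) -> None:
    """Send a short reminder; the SMTP host and sender address are placeholders."""
    msg = EmailMessage()
    msg["Subject"] = "Reminder: this week's lecture videos"
    msg["From"] = "instructor@example.edu"
    msg["To"] = address
    msg.set_content(
        "This week's lecture videos are posted on the course site; "
        "please watch them before the next class discussion."
    )
    with smtplib.SMTP("smtp.example.edu") as server:
        server.send_message(msg)

for address in non_viewers["email"]:
    send_reminder(address)
```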

Future Research

The use of video to present content in flipped classes is an important and popular approach used in a growing number of higher education institutions. Though production and delivery costs have dropped significantly as technology has advanced over the past several decades, there are still significant costs associated with this approach, including design and development effort, lecture capture technologies, and delivery systems and network bandwidth. Students are also asked to invest perhaps their most valuable resource – time – in watching these videos. Clearly, more research is needed to better understand how instructional videos impact student learning in various flipped classroom course designs. It is not enough to know that students prefer (or do not prefer) to view content videos, whether they view them, how often they view them, or when they view them.

Given the limitations of many LMSs and video player systems, faculty may not be able to easily determine even basic characteristics of student use of videos. Faculty (and instructional designers) need better information about which students are watching which videos, and even more importantly, how students are learning from those videos. New developments in some LMS and video playing systems are making it easier for faculty to get this information when they need it, and some provide performance information (feedback) to students as well. Figure 5 shows an example of the Echo360 player now in use at some universities.

Fig. 5 Current Echo360 playback interface

This player allows faculty and designers to see quickly how many students have viewed a video, and which parts of the video have been viewed. The interface now provides data on a per video, per class, or per student basis, supporting a variety of valuable analyses to help faculty assess the effectiveness of their instructional videos. In this system, it is possible to connect asynchronous discussions directly to the video, and the interface displays data about discussion forum use as well as video viewing. And since the data are collected by the LMS, an analytics application could be used to further process and display information about student use of videos back to each student, perhaps comparing their performance with that of their peers or with assigned and expected norms.

With new interfaces and more data available, further research should be more conveniently accessible to more faculty and instructional designers, and could help answer questions such as:

  • What length of video is most viewed completely?

  • How does the type of video (content, demonstration, application story, peer contribution, etc.) impact student viewing behavior?

  • If important content is included exclusively in videos, do students watch more videos more completely?

  • Does providing timely feedback about student viewing behavior to students impact their future viewing behavior?

  • How can instructors use timely feedback about student video viewing to modify in-class content and activities in a flipped classroom?

As faculty and researchers, we look forward to continuing this important work in our own flipped classes, learning from faculty peers in our local institution and sharing best practices with others in higher education.