
Introduction

One of the most important societal advancements of the 21st century is the rise of information and communication technology (ICT), which has fundamentally transformed our daily lives. We find dining places via smartphones, chat with strangers in virtual worlds, and seek information through a large collection of social networks. Despite these exciting new ways of living, college science education remains relatively unchanged (Deslauriers, Schelew, & Wieman, 2011; Mazur, 2009). Students sit quietly in large classrooms listening to lectures, complete individual labs following cookbook instructions, and take exams only to solve problems of no practical importance. It is time to reconceptualize the college science experience for all students (Mervis, 2013). In this chapter, we propose a guiding framework that can help design coherent science instruction, curriculum, and assessment at the college level to meet the needs of the new digital era. The framework rests on three interrelated core principles: (1) set the development of lifelong learning skills (e.g., critical thinking, scientific reasoning, collaborative problem solving) for all students as a top priority; (2) incorporate multi-layered instructional supports using technologies; and (3) design new assessments for individual students that demonstrate and facilitate their growth in lifelong learning capacity.

Resetting Learning Objectives

Learning objectives, including content standards, have been a common topic in educational reform, more so in K-12 public education than in higher education. Many modern ideas on learning objectives can be traced back to Bloom's taxonomy (Bloom et al., 1956), which sets learning objectives for students in three domains: cognitive, affective, and psychomotor. Within the cognitive domain, for instance, the learning objectives are placed along a hierarchy that runs (from lower to higher) through knowledge, comprehension, application, analysis, synthesis, and evaluation. The higher-level objectives are often referred to as higher-order thinking or higher-level skills.

More recently, 21st century skills have been proposed in various policy documents and reports (e.g., http://www.p21.org/). To synthesize the abundant and multifaceted work related to 21st century skills, the committee behind the recent NRC (2012) report, Education for Life and Work: Developing Transferable Knowledge and Skills in the 21st Century, identified three broad domains of competence: cognitive, intrapersonal, and interpersonal. These are summarized in Table 1. In science learning, these skills may take various forms such as problem solving (Hodges, 2012), scientific reasoning (e.g., Bao et al., 2009), and critical and collaborative argumentation (e.g., Osborne, 2010).

Table 1 The three domains of the 21st century competencies proposed in NRC (2012)

The essence of these new learning objectives, we believe, is to prepare students as adaptive, lifelong learners. Therefore, the first principle we propose for reforming college science education is to set the development of lifelong learning skills for all students as a top priority. This principle is particularly relevant for college science education in the 21st century because of the knowledge expansion dilemma. On the one hand, there is a large body of basic (textbook) scientific knowledge, distilled over a long human history, for students to learn. Without mastering this body of basic knowledge, students can hardly move on to their next level of education or work and eventually (for some of them) contribute to frontier scientific research and development. On the other hand, new knowledge advances faster than ever; students can never catch up with the expansion if the focus is on assimilating existing knowledge. If, however, students develop lifelong learning skills in school, they can continue their own learning after graduation.

These 21st century skills, or higher-level learning objectives, are enduring and do not change rapidly. They serve to prepare students for future learning (Bransford & Schwartz, 1999) in an ever-changing world. They should not be decorative additives appearing in course syllabi; instead, they should be infused in every single activity of the courses students take. These objectives may differ over time because of "society's desire that all students attain levels of mastery—across multiple areas of skill and knowledge—that were previously unnecessary for individual success in education and the workplace" (NRC, 2012, p. 3). One particular new demand of the 21st century is the development of digital literacy (Lei, Shen, & Johnson, 2013). In the next sections, we highlight technological resources for college science education, including new forms of assessment that take advantage of technology.

Maximizing Instructional Support Through Technology

Instructional Practices Promoting Lifelong Learning

Froyd (2008) listed eight promising instructional practices in undergraduate STEM education. Based on these, we propose the following four instructional practices that may promote students’ lifelong learning skills:

  1. Designing activities to engage and motivate students in active learning. The essence of this practice is to develop strategies that help students take more ownership of and responsibility for their learning by making classrooms more student-centered environments. These activities range from demonstrating interesting science phenomena and making science content relevant to students' personal lives to extending learning beyond the classroom and linking science to other academic disciplines or even entertainment.

  2. Using scenario-based content organization. Scenario-based approaches refer to the wide range of instructional practices that organize learning materials over an extended period of time around one or more scenarios. These practices are often labeled problem-based, project-based, case-based, inquiry-based, or challenge-based learning.

  3. Organizing students in collaborative work. This practice combines two separate practices proposed by Froyd (2008): organizing students in small groups and organizing students in learning communities. Collaborative work can take many forms, within a course or across multiple courses, in or after class, and through face-to-face or virtual interaction.

  4. Conducting research. This practice aims to involve undergraduate students, typically advanced ones, in science research either in an established lab or under the supervision of a faculty member.

These practices are closely related and overlap (Fig. 1). For instance, scenario-based approaches and collaboration are often considered important ingredients of active learning environments. Nonetheless, active learning can be individual-based and can occur in classes with more traditional content organization. Interested readers can fill in the inner overlapping areas depicted in Fig. 1 with their own examples.

Fig. 1 The interrelated instructional practices that promote lifelong learning

Technological Resources

Advanced technologies have had significant impacts on how students learn and how teachers teach (Lei et al., 2013; NSF Task Force on CyberLearning, 2008). In this section we highlight a few technological resources that can augment the aforementioned instructional practices and prepare college students to be lifelong learners in the 21st century.

Personal Response Systems

Personal response systems, or clickers, have become a popular tool for large lectures in college science classrooms. The use of clickers is often accompanied by the instructional practice called Peer Instruction (Crouch & Mazur, 2001; Mazur, 1997), a pedagogy developed to engage all students in large college science courses. Peer Instruction uses conceptually challenging questions to engage students in scientific reasoning and argumentation. In a Peer Instruction session, students are typically presented with a conceptual question in multiple-choice format. After they spend a minute or two thinking about the problem, they use clickers (or alternatives such as flashcards) to submit their individual answers. The instructor then provides feedback or follow-up questions based on the distribution of students' responses. For instance, if a large proportion of students respond incorrectly, the instructor can ask the students to discuss the problem with their neighbors (especially ones with different answers). The students then answer the question again before the instructor finally reveals and explains the answer. A minimal sketch of this decision logic appears below.
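
The sketch below encodes one plausible version of that logic in Python. The 30–70 % thresholds are commonly cited rules of thumb rather than fixed rules, and the function name and return messages are our own illustrative choices.

```python
# A minimal sketch of the decision logic an instructor (or clicker
# software) might follow after one Peer Instruction vote. The 30-70 %
# cutoffs are illustrative assumptions, not fixed rules.

from collections import Counter

def peer_instruction_step(responses, correct_choice):
    """Decide the next move from one round of clicker responses."""
    counts = Counter(responses)              # e.g. {'A': 15, 'B': 20, ...}
    total = sum(counts.values())
    fraction_correct = counts[correct_choice] / total

    if fraction_correct > 0.70:
        return "explain briefly and move on"
    elif fraction_correct < 0.30:
        return "revisit the concept before re-polling"
    else:
        return "ask neighbors with different answers to discuss, then re-poll"

# Example: 45 students answer a multiple-choice conceptual question.
votes = ["B"] * 20 + ["A"] * 15 + ["C"] * 10
print(peer_instruction_step(votes, correct_choice="B"))
```

In the middle band, peer discussion is most productive because a student with the right reasoning is likely to sit near one with the wrong reasoning, which is exactly the pairing Peer Instruction tries to exploit.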

Peer Instruction has shown success in improving college students' conceptual learning and problem solving and in retaining students in STEM majors (Kalman, Milner-Bolotin, & Antimirova, 2010; Mazur, 2009; Watkins & Mazur, 2013). Deslauriers et al. (2011) described a comparison study that measured the impact of deliberate practice (Ericsson, Krampe, & Tesch-Römer, 1993) in a large-enrollment introductory physics course. In the constructivism-based deliberate practice approach, students solve a series of challenging questions, make and test predictions, and critique their own and peers' arguments during class time, activities that require them to practice physicist-like habits of mind while receiving frequent feedback from peers and the instructor. Clickers were used to aid students' problem-solving activities during class. Compared with a traditional lecture session taught by an experienced and highly rated instructor, the 3-h intervention session taught by a trained but inexperienced instructor produced increased attendance, higher engagement, and substantially greater conceptual learning.

Despite this documented success, a major constraint of using clickers is that the instructor needs to develop a set of high-quality, challenging questions for the students, just as in traditional approaches. This may drive students to think deeply about the subject matter, but it may also inhibit them from developing the essential skill of raising critical questions themselves, an inherent trait of a lifelong learner.

Computer Visualizations and Simulations

Computer visualizations and simulations (CVS), including computer-based modeling environments and virtual experiments, have become popular instructional tools in science education at all levels (NRC, 2011; Scalise, Timms, Moorjani, Clark, & Holtermann, 2011; Shen, Lei, Chang, & Namdar, 2014). One well-known example is the PhET Interactive Simulations project developed at the University of Colorado, Boulder (http://phet.colorado.edu/). PhET simulations cover various science (and math) topics at the elementary, secondary, and university levels. They not only visualize abstract and complex scientific phenomena, but also provide opportunities for students to interact with the simulations and thereby practice inquiry learning (e.g., Lancaster, Moore, Parson, & Perkins, 2013; Wieman, Adams, & Perkins, 2008). For example, Podolefsky, Perkins, and Adams (2010) examined how college students interacted with PhET simulations with minimal explicit guidance. They documented two cases of how students worked with a particular simulation, Wave Interference, in which students may choose different objects to show, different measurement tools to use, and different variables to manipulate in order to make progress toward a scientific model of wave interference. Given the flexibility of the simulation, the students followed different exploration paths, similar to how scientists investigate natural phenomena.

Another good example is the ChemCollective (www.chemcollective.org), developed at Carnegie Mellon University. It is a collection of online activities, including virtual labs, tutorials, and tests, for general chemistry instruction. The virtual labs are designed to engage students in authentic chemistry problem solving and to complement algebraic computations for better conceptual understanding. Students' engagement in ChemCollective has been shown to help identify misconceptions, facilitate deeper conceptual understanding, and predict posttest performance (Yaron, Karabinos, Lange, Greeno, & Leinhardt, 2010). Taking a community approach, ChemCollective allows instructors from other institutions to contribute to the development of instructional materials.

With a workforce orientation, Stephens and Richey (2013) cautioned that computers and simulations are unlikely to fully substitute for real-world experiences. They observed that new employees recently hired by the Boeing Company were generally good at using digital tools; however, many of them had rarely been put in situations where they had to create a product of value, and even after training they remained weak in the skills needed to manipulate materials effectively. Finkelstein et al. (2005) showed that well-designed computer simulations could be used productively in lieu of real laboratory equipment when used in proper contexts. The key factor in the success of their project was that the circuit simulation they used provided a variety of visual representations that made invisible physics concepts visible to students. de Jong, Linn, and Zacharia (2013) reviewed the affordances and constraints of physical and virtual laboratories in science and engineering education, and recommended that:

…Combinations of virtual and physical laboratories offer advantages that neither one can fully achieve by itself…. Research on virtual and physical laboratories calls for nuanced decision-making…. Designers of instruction can improve outcomes by taking advantage of the affordances of each type of laboratory…. To design laboratories that take advantage of powerful guidance requires interdisciplinary teams involving domain experts, technologists, and learning scientists. (p. 308)

Computer-Supported Collaborative Learning

Work on computer-supported collaborative learning (CSCL) has risen with information, network, and Web technologies (for a conceptual review, see Goodyear, Jones, & Thompson, 2014; Stahl, Koschmann, & Suthers, 2006). This instructional approach focuses on developing computer-based learning environments built on a deep understanding of social structure, interaction, and dynamics, with relatively broad learning outcomes in mind. Frisch, Jackson, and Murray (2013) described the WIKIed Biology course, which infused Web 2.0 tools (del.icio.us, CiteULike, and Google Docs and Sites) to help college students collaborate with each other and learn biology. Using these tools, students worked together to find, create, and disseminate information and knowledge related to the course topics. Results showed that the students increased their understanding of certain biology topics as well as their critical thinking skills. To understand how students collectively organize information in multiple modes and argue about socioscientific issues accordingly, Namdar and Shen (2014) documented a study in which they developed a science learning unit on nuclear energy for preservice science teachers. The unit incorporated a newly developed knowledge building and sharing platform (ikos.miami.edu) that offers three distinctive representational modes: pictures, text, and concept maps. The study indicated that the group of learners was able to generate a relatively dense knowledge network, and that concept maps and wiki entries were more connected than pictorial entries. The findings also suggested that students' knowledge organization and their argumentation practices informed each other in complex ways.

One challenge in incorporating CSCL in college settings is grading, since in most college classes students are graded individually. How to balance individual accountability and productive collaboration in CSCL still needs more empirical research.

Educational Video and Computer Games

Video and computer games have become a popular form of entertainment for people of all ages. Gee (2007) asserted that in game playing, players learn actively and critically, experience the world in a new way, and develop resources for future learning. However, evidence for the effectiveness of games for science learning is still contested, and science learning with games rarely occurs in college settings (NRC, 2011). One major challenge in adopting games in college science education is to make game playing genuinely educative and meaningful. A well-known example is Foldit (https://fold.it/portal), an online puzzle video game about protein folding. It takes a citizen science approach that allows users to contribute to actual scientific research on protein structure and folding, which is critical in bioinformatics, molecular biology, and medical research. The highest-scoring solutions submitted by players are analyzed by researchers to evaluate their scientific value in solving real-world problems. Notable accomplishments of Foldit players include deciphering the crystal structure of the Mason-Pfizer monkey virus retroviral protease (Khatib et al., 2011) and achieving the first crowdsourced redesign of a protein (Eiben et al., 2012). A similar game is EteRNA (http://eterna.cmu.edu/web/), which enables players to solve puzzles related to the folding of RNA molecules. However, it remains an open question how these games can be embedded in formal curricula.

OpenCourseWare

With the goal of enhancing human learning worldwide through the Internet, OpenCourseWare (OCW) became a popular means of knowledge dissemination for many of the world's top universities during the first decade of the 21st century. A well-known example is the physics series offered by MIT professor Walter Lewin, including Newtonian Mechanics, Electricity & Magnetism, and Vibrations and Waves (http://ocw.mit.edu/courses/). More recently, OCW has evolved into Massive Open Online Courses (MOOCs), web-based, large-scale free courses with no restrictions on enrollment (Adamopoulos, 2013; Balfour, 2013). Overcoming geographic and financial restrictions, a massive number of learners can pursue their individual learning in MOOCs. Popular MOOC platforms include edX, Coursera, and Udacity.

Hollands and Tirthali (2014) interviewed 83 individuals knowledgeable about MOOCs, including administrators, faculty members, researchers, and others. The authors identified six major goals for MOOCs: (1) extending reach and access (the most frequently stated goal), (2) building and maintaining brand, (3) improving economics, (4) improving educational outcomes, (5) innovation, and (6) research on teaching and learning. They suggested that institutions have achieved some degree of success on all of these goals except improving economics. Many interviewees agreed that MOOCs can improve educational outcomes; for instance, integrating MOOCs with on-campus courses has shown some signs of success, as students in this approach can spend more class time on problem solving instead of listening to lectures.

A major criticism of MOOCs is their low retention rate: only 50–60 % of the students enrolled in a MOOC return after the first course, and only about 5 % earn a credential after completing a course (Koller, Ng, Do, & Chen, 2013). Recent studies have explored students' engagement patterns and their causes. Since videos are a central element of all MOOCs, Guo, Kim, and Rubin (2014) examined student engagement with videos. They obtained data from 6.9 million video watching sessions across four edX courses: Intro to CS and Programming (MIT, n = 59,126), Statistics for Public Health (Harvard, n = 30,742), Artificial Intelligence (Berkeley, n = 22,690), and Solid State Chemistry (MIT, n = 15,281). Engagement was assessed by how long students watched a video and whether they attempted to answer post-video assessment problems. Video properties were measured by length, type, presentation style, quality, and the instructor's speaking rate. The results showed that shorter videos, videos that combine the instructor's "talking head" with slides, videos in which instructors convey personal enthusiasm, and videos with Khan-style drawing (see https://www.khanacademy.org/) are more engaging than longer videos, videos with slides only, high-fidelity studio recordings, and still screencasts. Using the same courses, Kim et al. (2014) investigated within-video engagement behaviors. To understand the causes of video interaction peaks, which indicate points of interest or confusion within a video, the study combined peak profile analysis (from logs) with visual content analysis (an image similarity metric). The results showed that interaction peaks can be explained by five student activity patterns: starting a new piece of material from the beginning, returning to missed content, following a tutorial step, replaying a brief segment, and repeating a non-visual explanation.
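
To illustrate the kind of measure these studies rely on, the sketch below computes simple per-video engagement statistics: the average fraction of the video watched and the rate of post-video problem attempts. The log schema here is a hypothetical simplification for illustration, not the actual edX data format.

```python
# A minimal sketch of video-engagement metrics in the spirit of Guo et
# al.: aggregate watching sessions per video. The field names
# (video_id, watch_seconds, video_length, attempted_quiz) are invented.

from collections import defaultdict

sessions = [
    {"video_id": "v1", "watch_seconds": 180, "video_length": 360, "attempted_quiz": True},
    {"video_id": "v1", "watch_seconds": 350, "video_length": 360, "attempted_quiz": True},
    {"video_id": "v2", "watch_seconds": 60,  "video_length": 720, "attempted_quiz": False},
]

by_video = defaultdict(list)
for s in sessions:
    by_video[s["video_id"]].append(s)

for vid, group in by_video.items():
    mean_fraction = sum(s["watch_seconds"] / s["video_length"] for s in group) / len(group)
    quiz_rate = sum(s["attempted_quiz"] for s in group) / len(group)
    print(f"{vid}: watched {mean_fraction:.0%} on average; {quiz_rate:.0%} attempted the quiz")
```

Comparing such metrics across videos of different length, style, and production quality is essentially how the engagement findings above were derived.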

Connection to Arts

Efforts have been made to connect science education with art education because arts practices can promote inspiration and interest. Here we highlight a few examples that take advantage of technology. A common approach is to develop a course or program that integrates arts and sciences. Jennifer Burg at Wake Forest University initiated a project aimed at developing curricular materials that integrate mathematics, science, computer science, and digital sound production (http://csweb.cs.wfu.edu/~burg/CCLI/Templates/home.php). The project brought together college-level teachers and students from science and art disciplines to carry out, refine, and disseminate the curricular materials.

The sciences and arts can also support each other as students learn science concepts. For example, Bopegedera (2005) conducted a light-related program in which students participated in both art workshops and science labs, using their scientific understanding of light to create artistic products. In the art workshops students could create products by hand or with graphing software (e.g., constructing light waves with yarn), while in the science labs they learned concepts related to light (e.g., the relationships among frequency, wavelength, and the speed of light).

Another way to link arts and sciences is to exploit the power of visualization. One good piece of software that helps practicing scientists create and animate 3-D molecules is Molecular Flipbook (http://molecularflipbook.org). With powerful visual aids such as molecular graphics, scientists can communicate their findings to others aesthetically and informatively (Atwood & Barbour, 2003). Other creative ways to visualize and disseminate science ideas to the public have also been promoted. For example, Science Magazine hosted a competition named "Dance Your PhD" to encourage college students to use art to communicate scientific ideas and fuel creative thinking. The 2014 Dance Your PhD prize was awarded to a UGA plant biology student who danced out how forests regenerate after a tornado (UGA Today, 2014). Despite these innovative approaches, research on connecting arts and science at the college level still needs much empirical work.

Summary

In this section, we described a few notable examples of technologies that can be used to promote college students' lifelong learning competencies. A number of important technologies have been left out of this review due to space limits, including physical sensors (e.g., Milner-Bolotin & Moll, 2008), virtual or mixed realities (e.g., Cheng & Tsai, 2013), mobile devices (e.g., Hwang, Yang, Tsai, & Yang, 2009), and artificial intelligence (e.g., Koedinger & Corbett, 2006), to name a few. We want to echo the position that it is not just the technology but how the technology is used that matters (Mazur, 2009; Mishra & Koehler, 2006). Each instructor needs to weigh the available resources and student needs when incorporating these technological resources. Table 2 summarizes the relevant features of these technologies with respect to particular instructional practices.

Table 2 Technological resources that can be used to facilitate promising college science education instructional practices

Technology-Enriched Assessment for Learning

Without appropriate assessments, a pedagogical innovation is incomplete (Pellegrino, 2013). Technological advancements have great potential to expand how science assessments can be designed and used. In this section, we describe a few assessment approaches that draw heavily on technology to nurture students' lifelong learning capacity.

Embedded Formative Assessment

Formative assessment has been increasingly used in science instruction. Black and Wiliam's (1998) seminal paper emphasized the various ways that formative assessment can be practiced in classrooms and the ways evidence can be gathered to evaluate the effectiveness of those practices. A characteristic that distinguishes formative assessment from summative assessment is that formative assessment is for learning, not of learning (Black, 1993). Driven by this distinction, formative assessment offers opportunities for students to recognize their misconceptions and improve their understanding based on timely feedback.

Although formative assessment has great potential to complement instruction and enhance learning, a few prerequisites must be satisfied for it to benefit students. First, sufficient professional development needs to be provided so that teachers fully understand formative assessment strategies and know when and under what circumstances each strategy should be practiced (Furtak et al., 2008). Second, formative assessments need to meet quality standards so that they elicit valid information from students. Last, mechanisms need to be developed for teachers to make use of the results: it is not uncommon for assessment results to be left sitting on the shelf after substantial effort has been spent collecting them (Ruiz-Primo & Furtak, 2007). After a decade of research on formative assessment, Bennett (2011) provided a comprehensive review and called for a more critical view of how formative assessment should be implemented and how its effectiveness should be assessed.

Formative assessment can take a variety of forms. For instance, in a college-level medical science course, Riffat, Quadri, Waseem, Mahmud, and Iqbal (2010) practiced a variety of learning and formative assessment tools such as small-group discussion, self-directed learning, and quizzes. The authors reported improved critical thinking skills and course understanding through the integrated learning and assessment methods. Lancor (2013) described an approach that used student-generated analogies as a formative assessment tool to elicit students' ideas about energy in biology, chemistry, and physics. Computer technology provides an efficient way to embed formative assessment in lesson sequences (Liu, Ryoo, Sato, Svihla, & Linn, 2013). Kibble (2007) reported on a program using online quizzes as formative assessment, finding that students who participated in the formative quizzes received higher scores on summative assessments and self-reported that the quizzes provided quality feedback.

A key component of formative assessment is the mechanism for providing informative feedback that students can use to improve their learning. For instance, the aforementioned Peer Instruction method (Crouch & Mazur, 2001; Mazur, 1997) is a form of formative assessment: students receive instant feedback from the automated response distribution of the whole class, from their peers through discussion and argumentation, and from their instructor through clarification and explanation. One constraint of this approach is that students have to attend the class (which they should) to receive the feedback. In contrast, Doige (2012) described an informal, email-based formative assessment program that encouraged freshmen to constantly revisit their first-year general chemistry materials in a low-stakes environment. Participating students received a formative assessment question by email twice a week and responded by email; the instructor then provided timely, personalized feedback. The study revealed certain patterns of participation in this voluntary program and showed that students who participated regularly were more likely to succeed on the summative assessments. One drawback of this approach is that if a large number of students participate, responding to each individual student becomes extremely time-consuming.

In general, formative assessment should be practiced more frequently in college science classrooms given its potential to provide helpful feedback and improve learning. Formative assessment strategies are particularly needed for large-scale courses including MOOCs as they may be able to help increase student engagement and retention.

Automated Scoring

Automated scoring of constructed-response items is one of the most prominent technologies developed for assessing students' deep understanding (Bennett & Sebrechts, 1996; Dzikovska, Nielsen, & Brew, 2012; Leacock & Chodorow, 2003; Mitchell, Russell, Broomhead, & Aldridge, 2002; Nielsen, Ward, & Martin, 2008; Sandene, Horkay, Bennett, Braswell, & Oranje, 2005). Science educators have called for the use of constructed-response items to measure deep understanding and elicit reasoning (e.g., Lane, 2004; Shepard, 2000), but their use has been limited by cumbersome scoring and long turnaround times. Automated scoring, if accurate, can shorten the time between test administration and score reporting, reduce the number of human raters needed, and avoid the bias typically introduced by human raters (Burstein, Marcu, & Knight, 2003; Liu, Brew, Blackmore, Gerard, Madhok, & Linn, in press; Williamson, Xi, & Breyer, 2012).

A number of studies have employed automated scoring of college students' responses to science assessments. Attali, Powers, Freedman, Harrison, and Obetz (2008) applied c-rater®, an automated content-scoring tool developed by the Educational Testing Service, to score college-level science items in biology and psychology. Responses to the items were typically 1–3 sentences long. The average kappa indicating agreement between automated and human scores was .62 for the biology items and .83 for the psychology items. Dzikovska et al. (2012) used the content-scoring engine BEETLE II to score college-level physics responses of 1–2 sentences; the kappa value was around .69 for the items tested.
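
For readers unfamiliar with the statistic, the following sketch shows how agreement between automated and human scores is typically quantified with Cohen's kappa. The score vectors are invented for illustration, not data from the studies above.

```python
# A minimal sketch of quantifying rater agreement with Cohen's kappa,
# which corrects raw percent agreement for agreement expected by chance.

from sklearn.metrics import cohen_kappa_score

human_scores   = [0, 1, 2, 2, 1, 0, 2, 1, 1, 2]   # expert rater
machine_scores = [0, 1, 2, 1, 1, 0, 2, 1, 2, 2]   # automated engine

kappa = cohen_kappa_score(human_scores, machine_scores)
print(f"kappa = {kappa:.2f}")   # 1.0 = perfect agreement, 0 = chance level
```

Values above roughly .6 are conventionally read as substantial agreement, which is why the studies above report kappa rather than raw percent agreement.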

Nehm and colleagues have applied machine-learning techniques to automatically score college students' written responses related to evolutionary biology (Ha, Nehm, Urban-Lurain, & Merrill, 2011; Nehm, Ha, & Mayfield, 2011). Nehm et al. (2011) evaluated the scoring performance of the machine-learning software Summarization Integrated Development Environment (SIDE; http://www.cs.cmu.edu/~cprose/SIDE.html) against that of human experts, using a corpus of 2,260 explanations of evolutionary change written by 565 college students. The study found that SIDE performed very well overall (kappa > 0.80) and excelled at assessing understanding of natural selection in terms of key concept diversity. Similarly, Ha et al. (2011) applied SIDE to score written responses (more than 1,000) on evolutionary change from biology majors and nonmajors in introductory biology courses at two institutions. The results indicated that the automated scoring software performed well in most cases, accurately evaluating students' understanding of evolutionary change. The authors also identified several common types of responses that led to poor computer-scoring performance: responses using many key terms but missing important aspects, responses with key terms scattered throughout, responses using uncommon or complex expressions, and responses containing spelling and spacing errors.
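
The sketch below illustrates the general recipe behind such tools: train a text classifier on human-scored responses and use it to score new ones. The TF-IDF plus logistic regression pipeline is a generic stand-in rather than SIDE's actual algorithm, and the training examples are invented.

```python
# A minimal sketch of machine-learning-based scoring of short written
# responses: learn from human-coded examples, then predict for new text.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

responses = [
    "individuals with favorable variations survive and reproduce more",
    "the animals wanted to change so they grew longer necks",
    "variation is heritable and selection acts on it over generations",
    "the environment makes organisms mutate on purpose",
]
labels = [1, 0, 1, 0]   # 1 = key concept present, 0 = absent (human-coded)

scorer = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
scorer.fit(responses, labels)

new_response = ["organisms with helpful traits leave more offspring"]
print(scorer.predict(new_response))   # predicted score for an unseen response
```

Because such models lean on surface features of the text, the failure modes Ha et al. identified (scattered key terms, unusual expressions, spelling errors) are exactly the ones this kind of pipeline would be expected to have.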

Going forward, automated scoring has great potential to provide immediate feedback on students' written responses to open-ended items. In a formative assessment setting, if students can receive instant feedback on their answers and be pointed to relevant instructional steps, learning can be facilitated in a much more direct and engaging way. Linn et al. (2014) provided empirical evidence that machine-generated automated feedback is as effective as feedback from an expert teacher in prompting students to revisit instruction and revise their answers.

Automated scoring and feedback can be particularly helpful for large classes, including MOOCs, in which students are unlikely to receive adequate feedback from the instructor given the massive enrollment. Automated scoring and feedback make it possible for these students to receive meaningful and timely feedback, thereby increasing their engagement and performance.

Learning Analytics

Since science learning involves complex processes such as inquiry, modeling, argumentation, and collaboration, new forms of assessment need to address the dynamic nature of these processes in order to better capture and facilitate student learning (Gobert, Sao Pedro, Raziuddin, & Baker, 2013). Learning analytics is an emerging approach in education that focuses on "developing tools and techniques for capturing, storing, and finding patterns in large amounts of electronic data; representing them in generative and useful ways; and integrating them into intelligent tools that personalize and optimize learning environments" (Martin & Sherin, 2013, p. 12).

Not much work has yet applied learning analytics to college science education. Baker, Hershkovitz, Rossi, Goldstein, and Gowda (2013) presented a supervised method for analyzing students' moment-by-moment learning over time. In their study, participating students used an intelligent tutoring system for college-level genetics called the Genetics Cognitive Tutor. The researchers then applied a program to create graphs of each student's moment-by-moment learning. A graph is based on the probability that a student knows a concept or skill at a particular time point (the Bayesian Knowledge Tracing, or BKT, model; Corbett & Anderson, 1995) and learned the concept or skill at a particular step (e.g., a specific step during a problem-solving process; Baker, Goldstein, & Heffernan, 2011). The study found that the shapes of these graphs correlate with different learning outcomes. A sketch of the underlying BKT update appears below.
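
This is a minimal sketch of the standard BKT update, the building block of such moment-by-moment estimates. The slip, guess, and learning-rate parameters are illustrative assumptions, not values fitted in the cited studies.

```python
# A minimal sketch of Bayesian Knowledge Tracing (Corbett & Anderson,
# 1995): update P(skill known) from one observed response, then apply
# the chance of learning at this step. Parameter values are invented.

def bkt_update(p_known, correct, slip=0.1, guess=0.2, learn=0.15):
    """Posterior P(skill known) after one response, plus a learning step."""
    if correct:
        evidence = p_known * (1 - slip)
        posterior = evidence / (evidence + (1 - p_known) * guess)
    else:
        evidence = p_known * slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - guess))
    # The student may also learn the skill at this step.
    return posterior + (1 - posterior) * learn

p = 0.3   # prior probability the student knows the skill
for outcome in [True, False, True, True]:
    p = bkt_update(p, outcome)
    print(f"P(known) = {p:.2f}")
```

Moment-by-moment learning graphs are built from successive updates like this one, plotting where along a problem sequence the probability of learning spikes.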

Learning analytics has also been applied to understanding students' engagement patterns in MOOCs. For instance, Kizilcec, Piech, and Schneider (2013) proposed a mechanism to identify students' engagement trajectories in MOOCs based on patterns of learners' interaction with video lectures and assessments. Using k-means clustering, they classified learners in three computer science MOOCs into four major patterns: auditing, completing, disengaging, and sampling. Based on learners' self-reports, "completing" learners had a significantly better learning experience than the other three groups. The authors also compared clusters based on learner characteristics and behaviors, and found that two major factors motivated enrollment: (a) the course is challenging and (b) the learner is interested in the content of the course.
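
A minimal sketch of this kind of analysis appears below: cluster learners' weekly engagement trajectories with k-means. The coding scheme and data are invented for illustration and simplify the analysis actually reported by Kizilcec et al.

```python
# A minimal sketch of clustering engagement trajectories with k-means.
# Each row is one learner's weekly state (invented data): 0 = absent,
# 1 = viewed lectures only, 2 = viewed lectures and did the assessment.

import numpy as np
from sklearn.cluster import KMeans

trajectories = np.array([
    [2, 2, 2, 2, 2, 2],   # a "completing" pattern
    [1, 1, 1, 1, 1, 1],   # an "auditing" pattern
    [2, 2, 1, 0, 0, 0],   # a "disengaging" pattern
    [0, 1, 0, 0, 1, 0],   # a "sampling" pattern
    [2, 2, 2, 1, 2, 2],
    [1, 1, 0, 1, 1, 1],
])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(trajectories)
print(kmeans.labels_)   # cluster assignment per learner
```

The interpretive work lies in naming the clusters afterward (auditing, completing, disengaging, sampling); the algorithm only groups similar trajectories.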

Clearly, more empirical studies are needed on extracting information from learning analytics to facilitate college students' science learning. One promising direction is to use these fine-grained data to build more informative digital profiles of learners so that students as well as instructors can better reflect on the learning experience and take appropriate actions to improve learning as needed.

Conclusion

In this chapter we propose that college science education needs to prioritize the goal of developing students' lifelong learning skills. We reviewed a set of promising pedagogies and new forms of assessment that exploit innovative technologies in college science instruction in support of this goal. We applaud the still-rare technology-infused approaches that connect science and arts instruction at the college level. We stress that it is not technology per se, but how it is integrated into instruction in different contexts, that matters. To make this happen, we need to engineer creative ways to support faculty in using these innovative methods and technologies. A good example of a university-level initiative is the Science Teaching and Learning Fellows program of the Carl Wieman Science Education Initiative at the University of British Columbia (http://www.cwsei.ubc.ca). We believe that large-scale implementation of these new technologies, whether through a bottom-up or a top-down approach, has the potential to bring about transformative changes that achieve the goal of college science education in the 21st century.