1 Background

American higher education is quite different from European higher education (Theilen 2011; Huisman and van Vught 2009). The American model incorporates a liberal arts core, including a mix of science, history, English, and social science. The first and second years tend to have fewer degree-specific courses; an American chemistry student would take only one chemistry course during his or her first year. In contrast, European chemistry students typically focus on their degree from the start. A European chemistry student would take a series of chemistry courses, such as first-year analytical chemistry, first-year physical chemistry, and first-year organic chemistry. Both continents produce excellent university graduates, but there are significant differences in the degree programs.

A large proportion of American chemical education research has focused on the pedagogy and learning associated with general chemistry. Why? General chemistry serves as a “gateway” course for the vast majority of science, engineering, and medical degrees at almost all American universities. Few students become scientists, engineers, or doctors without success in general chemistry. As a result, lectures tend to be filled with a large and diverse student population. These students tend to be first-year students, but more advanced students will also enrol in general chemistry. American students have varied interests and career goals, as well as different levels of prior knowledge and skills. This diversity of students increases the challenge of selecting effective pedagogical methods. General chemistry covers content from first-year physical, inorganic, and analytical chemistry, and it is usually split into two one-semester courses. General Chemistry I (CHEM 121 at Ferris State University) covers the first half of the content; most students take this course during the fall semester of their first year of university. General Chemistry II (CHEM 122 at Ferris State University) covers the second half of the content; most students take this course during the spring semester of their first year.

One confounding variable in American chemical education research is the semester cohort system. Due to the importance of general chemistry, the course is often repeated during an academic year with different cohorts of students. The author has observed anecdotally that fall semester cohorts and spring semester cohorts are not equivalent. Spring semester cohorts have fewer students enrolled in the Honors Program. Many spring semester students failed chemistry during the prior fall semester, and many needed to complete additional mathematics prerequisites, which caused them to lag behind their peers. However, the author has observed that spring cohorts appear to hold the same expectations for success as fall cohorts.

The author’s pedagogical approach described in this chapter illustrates an observation made by Carl Rogers: “…the curious paradox is that when I accept myself just as I am, then I change” (Rogers 1961). In the author’s experience as a professor, students often do not know that they have a problem until late in a course, and then they often do not know that solutions exist. Students will generally do what is needed to succeed, provided they understand what to do and the rationale behind those actions.

Why is “self-evaluation” important? A person’s ability to evaluate himself or herself has been related to overall performance level (Dunning et al. 2003). The author’s personal experience has shown that students are particularly poor at self-evaluation. Students cannot correct problems that they do not recognize. Instructors are frequently more aware of common problems than students are, but the majority of the power to mitigate these problems lies with students.

2 Variables Affecting Student Performance

The literature has investigated many variables proposed to affect student performance. Some of these variables are pedagogical in nature, such as new instructional and assessment methods. Others are more fundamental, such as the effect of multitasking on learning and student performance (Bowman et al. 2010; Zhang 2015; Sana et al. 2013). The author assigns an end-of-semester writing assignment in which current students write a letter to future CHEM 121 students recommending activities to pursue and activities to avoid. Based upon a cursory review of these student letters, students endorse four variables as particularly important: study skills, time management, anxiety management, and attentional control.

Study skills have long been recognized as essential for consistent, high-achieving student performance (Dendato and Diener 1986; Robbins et al. 2004). Robbins and his co-authors performed a meta-analysis in order to identify factors affecting college student outcomes. The authors analyzed 36 studies that examined the relationship between academic-related skills (study skills) and student success and retention. They found that effective study skills have a strong positive relationship with student success and retention.

During the fall semester of 2010, the author offered to schedule fifteen-minute conferences with his chemistry students. All students were encouraged to take advantage of the opportunity, but the conferences were strongly recommended for students earning a “D” or “F” on the first test. Approximately 30 students out of 120 discussed their progress with the author. Due to the ad hoc nature of the information gathering, one cannot draw firm conclusions from these conferences. However, the author observed that struggling students tended to limit their out-of-class activities to the required homework, and they tended to defer their learning activities until shortly before tests. One failing student was bewildered by the author’s assertion that she needed to spend at least two hours per day to “rescue” her grade. She truly did not know how to spend her time productively beyond a simple review of lecture notes prior to a test. In contrast, the “B” and “A” students employed more learning activities on a regular basis. Their test preparation focused on reviewing material, not learning it.

In addition to study skills, time management techniques have often been taught at American universities. One research group developed a multidimensional questionnaire to assess four factors affecting time management: setting goals and priorities, mechanics of planning and scheduling, perceived control of time, and preference for disorganization (Macan et al. 1990). The study also assessed students’ stress levels and grade point averages through self-report. Students’ perceived control of time showed the strongest correlations with stress, satisfaction, and performance.

Anxiety is an omnipresent factor in life, but it can have positive effects. For example, the author’s youngest son did very little job hunting during his final quarter at the University of Chicago. Why? He felt little anxiety or “inner pressure” to engage in a difficult activity. Once anxiety developed after graduation, he started to look for a job. Test anxiety has a similar effect on students. Too little or too much test anxiety tends to produce sub-optimal results. Dendato and Diener investigated the effects of cognitive/relaxation therapy, study skills training, and combined therapy/training for students suffering from severe test anxiety (Dendato and Diener 1986). Relaxation/cognitive training was found to reduce anxiety but failed to improve test scores. Study skills training failed to reduce anxiety or improve test scores. However, the combined use of relaxation therapy and cognitive skills training significantly reduced anxiety levels and increased test scores.

Attentional control, or the ability to focus on tasks, appears to be inversely related to multitasking behavior. Sana and others determined that students who multitasked on a laptop during lecture scored lower on a test than students who did not multitask (Sana et al. 2013). Another study broadened multitasking to students’ overall usage of information and communication technologies (Junco and Cotten 2012). The authors found a negative correlation between students’ use of these technologies and their academic performance.

3 Technologies Used for Teaching and Learning

Modern educational practice has developed an array of technologies to teach and learn chemistry. Some practices have been used since the dawn of chemistry or earlier, such as the Socratic Method and practicals. Other practices became commonplace with the advent of computers and the Internet, such as online homework. Chemical educators must select their tools carefully, considering the desired learning outcomes as well as the constraints faced by their students. The present work utilized a variety of tools to promote teaching and learning.

3.1 Writing Assignments

Berthoff discusses the value of reflection through the use of “dialectical notebooks,” informal tools for recording ideas, questions, passages from books, and, most importantly, reflections upon them (Berthoff 1987). Both science and science education courses have made extensive use of student writing for purposes of reflection. For example, Grumbacher found that “learning logs” were an effective tool for high school physics; she reported that her students’ logs improved their problem-solving ability, helped them integrate experience and theory, and increased their enjoyment of physics (Grumbacher 1987). Indeed, Byers emphasized reflection as a major task for his chemistry students in an effort to develop their independent learning skills (Byers 2007).

3.2 Online Homework

Online content-based homework has become a common tool for chemistry teachers. This type of homework encourages students to practice problem solving and answer conceptual questions. Most software packages provide additional assistance or direct references to a textbook if a student submits an incorrect answer. Instructors also receive valuable information about their students’ learning, which can guide instruction. The author used the online homework package “Mastering Chemistry,” which accompanies the course textbook (Pearson Higher Education 2013). The web-based software collects a range of information: points earned by students, the time spent on assignments, and other information, such as commonly submitted wrong answers. All of this information can be used by instructors for assessment purposes.

3.3 Classroom Polling Devices

The use of classroom polling devices, or clickers, has been extensively reported in the literature. MacArthur and Jones reviewed 92 reports applicable to the use of clickers in chemistry classrooms. The devices permit rapid collection of student answers to questions posed in lecture for formative evaluation, and they have also been used to collect student answers for quizzes and tests (MacArthur and Jones 2008). Researchers have used clickers to collect survey data in communications (Bunz 2005) and psychology (Langley et al. 2007). These researchers directly compared clickers with other standard survey methods and determined that the devices were a suitable alternative.

3.4 Course Percentage/Grades

Course percentage and grades provide a global measure of student performance in chemistry courses, including laboratory and homework. Many variables affect grades. Their relative influence can change from professor to professor and semester to semester. This limits the utility of course grades as assessment instruments. Nonetheless, the author uses course grades as one measure of students’ performance.

3.5 Tests

In contrast to course grades, standardized tests measure students’ performance on a limited set of concepts and skills, but do so in a fashion that permits comparison across institutions, professors, and semesters. This project uses two standardized tests to assess student performance: the California Chemistry Diagnostic Test serves as the pre-test (University of California – Berkeley 2006), and the First Term General Chemistry examination serves as the post-test.

In summary, chemical educators have long been concerned with student learning. The chemical education community has invested considerable resources into the development of effective instructional activities and technologies, such as guided inquiry methods and online homework. The education community has also developed many qualitative and quantitative instruments in order to measure student learning. Figure 7.1 illustrates effective student assessment as a triangular model (Carnegie Mellon University 2015). Although “learning objectives” must be student-centric, the Carnegie Mellon University model focuses more upon teaching and assessment activities, which are teacher-centric. The author would like to propose an alternative model for learning, which is more student-centric (Fig. 7.2). Students are the individuals with the greatest capacity to make improvements in their learning processes. They enter their course of study with expectations of success. In the author’s experience, most students will exert themselves in their studies only as needed.

Fig. 7.1
figure 1

Relationships (Alignments) between learning objectives, instructional activities and assessments (Carnegie Mellon University 2015)

Fig. 7.2
figure 2

“Closing the Circle” in teaching and learning by expanding the relationships between teacher-centric activities and student-centric activities

Unfortunately, a student’s perception of his or her success can remain inaccurate until too much time has passed and multiple chemistry tests have been failed. If students receive sufficient, accurate information about their performance at the beginning of a course or degree program, they can recognize their strengths and weaknesses and then address them. Suppose, for example, that a student completing a self-assessment worksheet (College of Retention and Student Success, Ferris State University Seminar) recognizes that she lacks adequate time management skills. Understanding that inadequate time management would likely lower her grade, this student would probably become interested in learning effective time management strategies.

Learning is an inherently individual process. Ann Berthoff stated succinctly the nature of thinking, an essential process in learning: “Thinking begins with perception: all knowledge is mediated” (Berthoff 1987). New knowledge is constructed from a base of old knowledge and perceptions. As a student acquires new information, he or she needs to integrate it into this framework, modifying the framework as needed. This process is inherently reflective. A student can engage superficially in the educational process (i.e., come to lecture, complete homework questions, read the textbook, and perhaps even ask questions) and still not achieve satisfactory outcomes.

4 Methods

The author taught CHEM 121 students in the spring of 2014 at Ferris State University (Michigan, USA). He found the course to be quite challenging: 35.8% of the students did not pass. The author defines “failing” as earning a D, F, or W (withdrew from the course). He discussed this result with his colleagues, who told him that it was typical for the spring semester course. The fall semester 2013 CHEM 121 course had a very different result: only 11.8% of the students did not pass. The author believed that interventions could be employed to lower the failure rate of spring semester CHEM 121 courses to a level comparable to that of fall semester courses.

The foundation for the methods described in the following paragraphs resides in a specific student assessment activity. Student grades from all spring semester CHEM 121 courses from 2007 to 2014 (N = 936) are summarized in Fig. 7.3. Four different instructors, using very different pedagogies, taught during this period. Overall, 27.5% of the students earned a D, F, or W, hereafter labelled “DFW grades.” The percentage of DFW grades assigned in a particular semester ranged from 17.5% to 35.8%. Based upon this historical evidence, one could reasonably expect that a little more than a quarter of future cohorts would fail to earn an adequate grade for their degree program (C or better). A computational sketch of this DFW calculation follows Fig. 7.3.

Fig. 7.3
figure 3

The grade distribution for students enrolled in the spring semester sessions of General Chemistry I (CHEM 121) from 2007 through 2014 (N = 936)
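The historical DFW rate can be reproduced with a few lines of code. This is a minimal sketch, not the author’s actual analysis: the per-grade counts below are hypothetical placeholders chosen only so that the totals match the reported N = 936 and 27.5% DFW rate, since the exact distribution in Fig. 7.3 is not reproduced here.

```python
# Hypothetical per-grade counts for spring CHEM 121, 2007-2014.
# Chosen so the totals match the reported N = 936 and 27.5% DFW rate.
grade_counts = {
    "A": 180, "B": 230, "C": 269,   # passing grades (C or better)
    "D": 105, "F": 95,  "W": 57,    # "DFW" grades
}

total = sum(grade_counts.values())
dfw = sum(grade_counts[g] for g in ("D", "F", "W"))
dfw_rate = 100.0 * dfw / total

print(f"N = {total}, DFW rate = {dfw_rate:.1f}%")  # N = 936, DFW rate = 27.5%
```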

The author has taught CHEM 121 several times at Ferris State University, but three cohorts seemed most suited for comparison: spring semester 2014, fall semester 2014, and spring semester 2015. The pedagogical methods described hereafter were implemented during the spring semester of 2015. Spring semester 2014 and fall semester 2014 cohorts served as benchmarks for assessing the spring semester 2015 cohort.

Ferris State University attempts to prepare its students for the rigors of university learning. All first-year students are required to take a freshman seminar, the Ferris State University Seminar (FSUS), which addresses issues related to learning, such as time management and study skills. Unfortunately, the author heard many disparaging comments about FSUS from first-semester chemistry students, indicating that many of them are not receptive to changing their habits. It would not be surprising if a sizable percentage of second-semester CHEM 121 students believed during their first semester that time management and study skills were not important.

The author decided to improve his students’ receptivity to learning new ideas. He implemented his pedagogical plan during the spring semester 2015 CHEM 121 course (N = 93 students). The students completed their first 3-h practical in a classroom instead of a laboratory. There were four groups of 24 or fewer students, which permitted the use of a workshop model. Each session started with a frank discussion of past student performance. The students were shown the historical trends in student grades for spring semester CHEM 121 courses. The lecturer conveyed the gravity of the students’ situation by telling them, “Based upon prior semesters, 27.5% of you have already ‘washed out’ of CHEM 121. Look around the room. Four or five of you have already failed.” Almost every student looked shocked. The “prediction” was followed by a very simple statement: “We can do better.”

The workshop portion started after this short presentation. Each student completed the “Procrastination Quotient” worksheet (College of Retention and Student Success, Ferris State University Seminar), which was not collected by the instructor. The author presented FSUS material on time management, and students were then required to schedule time for studying. Subsequently, FSUS material related to study skills (College of Retention and Student Success, Ferris State University Seminar Program) was presented. The learning outcomes for the workshop can be summarized as follows: (1) encourage students to accept themselves as they are, and then (2) provide students with the tools needed to change.

During the semester, the author used a variety of formative and summative assessment instruments to provide students with the information needed to improve their self-awareness. The first week of CHEM 121 was used to administer two assessment instruments: (1) a standardized chemistry pre-test (University of California – Berkeley 2006), and (2) a “multitasking assessment” instrument based on an instrument provided by FSUS. Too many of the spring semester 2014 students had engaged in multitasking with their laptops or mobile devices during lecture. The activity required students to measure the time needed to complete a simple task while focusing solely upon it, and then to repeat the task while multitasking in a manner similar to driving a car while texting a friend. After collecting the multitasking data from the spring 2015 students, the data from the fall 2014 cohort were displayed for their reflection (Fig. 7.4); a statistical sketch of this comparison follows the figure. Multitasking slowed completion of even this simple task, by 15.9 s on average. Similarly, texting or chatting during lecture impedes a student’s ability to learn chemistry. As a result, multitasking has negative consequences for both time management and learning.

Fig. 7.4
figure 4

The experimental results of focused tasking compared with multitasking for the fall 2014 cohort. There is a significant difference in the average task time required by students (p = 0.05)
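The chapter reports a significant difference in average task time but does not specify the statistical test used. The sketch below assumes a paired design (each student timed the task both ways) analysed with a paired t-test; the timing data are simulated around the reported 15.9 s slowdown and are not the real fall 2014 measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_students = 24

# Simulated completion times in seconds: each student does the task focused,
# then again while multitasking (assumed mean slowdown of 15.9 s).
focused = rng.normal(loc=42.0, scale=6.0, size=n_students)
multitask = focused + rng.normal(loc=15.9, scale=5.0, size=n_students)

# The same students produced both measurements, so a paired test is appropriate.
t_stat, p_value = stats.ttest_rel(multitask, focused)

print(f"mean slowdown = {np.mean(multitask - focused):.1f} s, p = {p_value:.4f}")
```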

Classroom polling devices, online homework (Mastering Chemistry), and semester tests were also used for instructional and assessment purposes. Students’ data were summarized in tables or charts, and the summarized results were shared with the class during lecture. Students were encouraged to compare their individual performance with that of their peers and were also provided suggestions for improving learning and testing. For example, students’ answers on their first weekly quiz were collected using a classroom polling device (clicker). At the same time, students were asked to self-report the time they spent studying chemistry. The data were summarized and discussed at the next lecture (Table 7.1); an illustrative sketch of this summary follows the table. Finally, the First Term General Chemistry examination, a commonly used assessment tool, was administered as the course’s final examination (American Chemical Society, Division of Chemical Education, Examination Institute 2005).

Table 7.1 Summary of student scores on the first weekly quiz: Overall class average (78.9%), as well as the average student scores based upon the self-reported time spent on studying chemistry
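A summary like Table 7.1 can be produced by grouping quiz scores on self-reported study time. The sketch below is illustrative only: the records and the time brackets are invented, and the actual bracket boundaries used in Table 7.1 are not specified in the text.

```python
import pandas as pd

# Hypothetical clicker-quiz records: quiz score (%) and self-reported
# weekly hours spent studying chemistry.
records = pd.DataFrame({
    "quiz_pct":  [95, 60, 85, 70, 90, 55, 80, 75],
    "study_hrs": [ 6,  1,  4,  2,  5,  0,  3,  2],
})

# Bucket the self-reported time, then average quiz scores within each bucket.
records["time_bracket"] = pd.cut(records["study_hrs"],
                                 bins=[-1, 1, 3, 24],
                                 labels=["0-1 h", "2-3 h", "4+ h"])

summary = records.groupby("time_bracket", observed=True)["quiz_pct"].agg(["mean", "count"])
print(summary)
print(f"class average = {records['quiz_pct'].mean():.1f}%")
```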

Writing assignments provide students with an opportunity to engage in guided reflection, and this tool was used at different stages during the semester. Students completed two assignments: (1) read and answer a set of questions derived from the course’s syllabus, and (2) read, at the beginning of the semester, a set of anonymous “student letters” from prior semesters of the course and write a list of behaviors to adopt or avoid. Ferris State University’s course website (Blackboard 9.x) was used to collect and grade the writing assignments. Students were expected to understand the author’s expectations for them, as described in the syllabus. The author also wanted students to learn the best practices for students as recommended by students.

5 Results

Advances in education frequently need baseline data for comparison. As previously stated, three cohorts seemed most suited for comparison: spring semester 2014, fall semester 2014, and spring semester 2015. The pedagogical methods previously described were implemented during the spring semester of 2015. Spring semester 2014 and fall semester 2014 cohorts served as benchmarks for assessing the spring semester 2015 cohort because the two cohorts shared many assessment instruments and learning activities with the spring 2015 cohort.

Two standardized tests prepared by the American Chemical Society were administered as a pre-test and a post-test in order to assess student performance. The California Chemistry Diagnostic Test served as the pre-test over three semesters: spring 2014, fall 2014, and spring 2015 (Table 7.2). Similarly, the First Term General Chemistry examination served as the post-test over the same semesters (Table 7.3). Each student’s raw score was converted into a percentile rank using the tests’ normative data, as sketched after the tables. The cohorts’ results are summarized in Tables 7.2 and 7.3.

Table 7.2 Descriptive statistics for the California Chemistry Diagnostic Test using normative data. The groups had no significant difference with respect to ANOVA (p > 0.05)
Table 7.3 Descriptive statistics for the First Term General Chemistry using normative data. All of the groups had significant differences with respect to ANOVA (p < 0.05)
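The raw-to-percentile conversion can be sketched as a lookup with interpolation against a norm table. The norm values below are invented for illustration; the real normative data accompany the published test materials and are not reproduced here.

```python
import numpy as np

# Hypothetical norm table: raw score -> cumulative percentile rank.
norm_raw        = np.array([10, 15, 20, 25, 30, 35, 40, 44])
norm_percentile = np.array([ 5, 15, 30, 50, 68, 84, 95, 99])

def to_percentile(raw_score: float) -> float:
    """Linearly interpolate a percentile rank from the norm table."""
    return float(np.interp(raw_score, norm_raw, norm_percentile))

print(to_percentile(27))   # e.g. a raw score of 27 -> ~57th percentile
```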

The cohorts are fairly well matched, based upon the analysis of variance (ANOVA) of the students’ pre-tests. The chemistry content covered each semester is consistent, but the learning activities vary; as a result, it is not surprising that student performance on the post-test varies across semesters according to ANOVA. Every student received an overall percentage based upon his or her performance on various activities and assessments: practicals, quizzes, homework, writing assignments, class participation, semester tests, and the post-test (Table 7.4). Despite the differences in the post-test results, there was no significant difference in the overall percentages, which correspond to students’ final grades (A, B, C, D, or F). This result was surprising because no semester was finished with a predetermined course average in mind. In addition, some variation occurred between semesters in the basis for calculating the overall percentage; for example, semester tests accounted for 45% of the overall percentage in spring 2014 but 50% in fall 2014. A sketch of such a cohort comparison by ANOVA follows Table 7.4.

Table 7.4 Descriptive statistics for the Overall Course Percentages for each semester. The groups had no significant difference with respect to ANOVA (p = 0.05)
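The cohort comparisons in Tables 7.2, 7.3, and 7.4 rest on one-way ANOVA. The following sketch shows how such a comparison could be run; the three score arrays are simulated stand-ins, not the actual cohort data, and the group sizes and distributions are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(121)

# Simulated overall course percentages for the three cohorts (illustrative).
spring_2014 = rng.normal(74, 12, size=90)
fall_2014   = rng.normal(76, 11, size=110)
spring_2015 = rng.normal(75, 12, size=95)

# One-way ANOVA: does at least one cohort mean differ from the others?
f_stat, p_value = stats.f_oneway(spring_2014, fall_2014, spring_2015)

alpha = 0.05
print(f"F = {f_stat:.2f}, p = {p_value:.3f}; "
      f"{'no ' if p_value > alpha else ''}significant difference at alpha = {alpha}")
```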

Learning activities varied across semesters, especially writing activities and practicals. Weekly quizzes were an important part of the course because they supplied evaluation data to the instructor; they provided 10% of the overall course grade. It was also hoped that students would use their personal results as a measure of their own performance. The points varied across semesters, but the same technology was used: classroom polling devices. The quizzes were administered at the beginning of class, and the content was reviewed immediately afterward. There is no significant difference in overall student performance on the weekly quizzes, which measure students’ efforts to keep up with the content covered in lecture (Table 7.5). However, the first three weeks of lecture show a very different result (Table 7.6): the spring 2014 cohort performed much more poorly than the other two cohorts. All of the cohorts diminished in number between the first semester examination and the end of the semester, but the spring 2014 cohort lost the greatest number of students.

Table 7.5 Descriptive statistics for the Overall Weekly Quiz scores for each semester. The groups had no significant difference with respect to ANOVA (p = 0.05)
Table 7.6 Descriptive statistics for the Overall Weekly Quiz scores for the period prior to the first semester test. Spring 2014 was significantly different from the other two semesters with respect to ANOVA (p = 0.05)

Mastering Chemistry homework shows a result similar to the weekly quizzes. All of the cohorts diminished in number between the first semester examination and the end of the semester, but the spring 2014 cohort lost the greatest number. There are no significant differences in the cohorts’ average homework scores at the end of the semester (Table 7.7). In contrast, the first three weeks of work show considerable differences between the cohorts (Table 7.8). The spring 2014 cohort lagged behind the other cohorts in completing Mastering Chemistry, which provided online practice of content and problem-solving skills. Surprisingly, the spring 2015 cohort far exceeded the performance of the fall 2014 cohort, even though 48% of the fall 2014 cohort consisted of students enrolled in the Honors Program. These students were pursuing pharmacy or other medical degrees, and they are typically the best prepared and most motivated students at Ferris State University. The author did not determine how many students in the spring 2015 cohort were enrolled in the Honors Program, but he doubts that the percentage approached the fall cohort’s figure. It should be noted that the author has long offered his students both bonus points and penalties for online homework, which provided 10% of the overall course grade. The penalty occurs when a student misses a deadline for homework submission (a 20% deduction); the bonus occurs when a student meets a deadline with a score of 90% or better. A few bonus points are also offered for the “orientation” assignment to encourage students to begin their online homework. A sketch of this bonus/penalty scheme follows Table 7.8.

Table 7.7 Descriptive statistics for the Overall Mastering Chemistry scores for each semester. The groups had no significant difference with respect to ANOVA (p = 0.05)
Table 7.8 Descriptive statistics for the Overall Mastering Chemistry scores for the period prior to the first semester test. All of the cohorts were significantly different from one another with respect to ANOVA (p = 0.05)
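The bonus/penalty policy described above can be expressed as a small scoring function. The 20% deduction for a missed deadline and the 90% on-time threshold come from the text; the size of the bonus (2 points), the cap at 100%, and the interpretation of the deduction as 20% of the earned score are assumptions for illustration.

```python
def adjusted_homework_score(raw_pct: float, on_time: bool,
                            bonus_pts: float = 2.0) -> float:
    """Apply a deadline penalty/bonus scheme to one homework assignment.

    Assumptions: the penalty is 20% of the earned score, the on-time
    bonus is a small flat addition, and scores are capped at 100%.
    """
    if not on_time:
        return raw_pct * 0.80                    # 20% deduction for a missed deadline
    if raw_pct >= 90.0:
        return min(raw_pct + bonus_pts, 100.0)   # on-time score of 90%+ earns a bonus
    return raw_pct

print(adjusted_homework_score(94.0, on_time=True))    # 96.0
print(adjusted_homework_score(94.0, on_time=False))   # 75.2
```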

As described above, when the spring 2015 cohort began the semester, the author shocked the students with a very challenging statistic: historically, 27.5% of students enrolled in the spring CHEM 121 course receive a D, F, or W grade. The author challenged the students to change this situation. He provided them with methods for learning chemistry, as well as the skills needed to learn, such as time management and focusing on a single task at a time. He maintained pressure on his students by providing timely assessment data, such as weekly quiz scores, homework scores, and periodic test results. He had originally planned to formally require writing assignments to encourage reflection, but he was unable to do so. Nevertheless, the spring 2015 cohort dramatically decreased the percentage of F and W grades (Fig. 7.5); the cohort’s overall percentage of DFW grades was 16.1%.

Fig. 7.5
figure 5

Historical trends in the final course grades in spring CHEM 121 courses (2007 through 2014) in comparison with final course grades for spring 2015

6 Discussion

When the author submitted his grades for CHEM 121 for spring 2014, he found that many students were going to need to retake CHEM 121 or change their degree program: 35.8% of his students received a D, F, or W grade. This outcome was not unusual for the spring semester course, but the author found it unacceptable. He believes that very few students want to fail chemistry, but some students do not have the skills, attitudes, or even the self-awareness needed for success. “Student-centred” learning requires students to do the learning, which can be a problem for poorly prepared students.

As a direct result of his experiences with students, the author developed a pedagogical approach that attempts to address students’ shortcomings through self-awareness. He collected and analysed student assessment data and shared these data in order to enlighten his students. He also provided them with tools for success: time management and study skills. Finally, he encouraged his students to reflect upon their learning. These elements are essential for learning but are frequently underdeveloped in the poorest performing students.

He assessed the students that he taught over a span of three semesters. He started with very similar cohorts with respect to prior knowledge and skills: the analysis of variance (ANOVA) of the California Chemistry Diagnostic Test results indicates no significant differences between the three cohorts (p > 0.05). Likewise, the three cohorts had statistically similar overall course percentages. Based upon these two measures, one could conclude that the three groups are very similar.

For the spring 2015 cohort, the author used instructional time to collect and present assessment data, as well as to teach time management and study skills. The students worked harder, as measured by Mastering Chemistry and weekly quiz scores. The improved scores directly raised student grades, permitting a larger percentage of students to pass chemistry. The author had assumed that greater effort by students would also positively affect their acquisition of content knowledge and skills. However, the ANOVA results for the First Term General Chemistry examination indicate that the spring semester 2015 cohort did not achieve the same performance level as the other two cohorts. This curious result may be explained by a larger proportion of poorer performing students remaining in the course to take the First Term General Chemistry post-test. The author’s assessment of the three cohorts indicates that student success depends much less upon prior chemistry knowledge and skills than upon attitude, learning skills, and personal management skills. Interventions that target these student attributes may increase student success more than a strict focus on content delivery.