Introduction

Teaching and learning are not always seen as two distinct processes. Often it is assumed that if something is taught, then the students will learn. If students don’t learn, the first response from instructors is often that students are not working, not working hard enough, or not academically prepared for the current course. Sedlak (1987) described this problem as the teacher viewing his or her responsibility as ending with the presentation of material and the student’s responsibility as beginning with learning what the teacher presented.

The teacher’s opinion on how a subject should be taught is often accepted by most people because the teacher, as content specialist, is credited with knowing what should be taught and how it should be taught. This is especially true of university teachers who are doing research in the same field they are teaching. A similar assumption is often made with doctors. Here we expect that what the doctor says is accurate and true. We don’t often question the doctor’s opinion. When we are seriously ill, we might seek out a doctor who is a specialist and doing research on the disease we are suffering from. We believe this specialist will know more than our regular doctor because the specialist is involved in research on the problem and must therefore be an expert. This model of “professional as expert” is prevalent in both medicine and academia. This thinking can be misinterpreted to mean that the teacher knows how to teach the material and that, if the students don’t learn, it must be the students’ fault. The main problem with this teacher–doctor analogy is that the doctor operates in a one-to-one relationship with the patient and uses both interactions with the patient and tests that supply pertinent data to help analyze the problem and prescribe a treatment. In the classroom, the teacher works with thirty, two hundred, or more students at once, and there are few if any two-way interactions with the vast majority of them. Therefore, the analysis by a teacher that a student is not learning may not be as accurate as a diagnosis made by a doctor. There is a need for more teacher–student interaction and for open-ended assessments that would provide both additional and more targeted information on the problems a student is experiencing. Open-ended assessments can involve students in examining their own learning and help focus their attention on understanding, rather than only on achieving correct answers. This approach has been recommended by teaching experts (Shepherd 2000).

Student learning is a complex process that may not be totally evident to the teacher or the student. The student should be encouraged by the teacher and taught how to monitor his or her own learning. This self-awareness of the learning process (metacognition) should be promoted through the student’s interaction with the teacher and the teaching process. Metacognition is a process of the student engaging in “making sense” of new information, self-assessment, and self-reflection on what worked or didn’t work in understanding new information (Bransford et al. 2000). The learning environment can be developed by the teacher to include opportunities for the student to engage in metacognitive activities. These opportunities include self-assessment (such as ConcepTest questions), reflection, and opportunities to explain the logic of an answer in addition to providing the answer.

In learning theories, emphasis has been placed on how the process of teaching can affect learning in the individual student. Constructivist theory places the process of learning inside the mind of the individual and requires the learner’s active participation in the learning process. The Information Processing movement has helped identify brain-based parameters of how information enters working memory, moves to long-term memory, and is interwoven into students’ mental schemas. An often-cited Information Processing theory of memory and learning is Baddeley’s model of working memory (1986), which emphasizes that how soon information is rehearsed affects how well it is retained in working memory. The implication for teaching is that the teaching process should provide opportunities for students to rehearse material during the learning process. Once information enters the working memory, it must be integrated with and transferred to long-term memory if it is to be retained. Long-term memory represents our store of knowledge and is accessed when new information enters the working memory (Pellegrino et al. 2001). This Information Processing model of learning has been broadened to include cognitive neuroscience models of learning. Cognitive neuroscience models include the idea of schemas as ways of organizing information in long-term memory. Schemas are constantly changing as new information is received and integrated with that already in long-term memory. The formation and restructuring of schemas enable individuals to develop a mental model that guides their future learning (Pellegrino et al. 2001). Teaching based on the mental model of schema formation and revision includes the learner in the learning process. As a result, teaching has to go beyond traditional lecturing and content assessment and include opportunities for students to rehearse, process, and self-assess their knowledge.

In order to integrate the information processing-neuroscience model of learning with lecturing, traditional lectures should be examined for opportunities for students to incorporate new knowledge in working memory with that already held in long-term memory schemas. Pedagogical techniques that help provide such opportunities include the use of interactive questioning in class; formal student reflection on assumptions and conclusions in lab reports; use of technological resources such as course websites that provide access to support materials online; group work using guided inquiry materials; assessment opportunities that emphasize the reasons for an answer and not just the answer; and the use of peer leaders in discussion groups. There is still a place for lecturing in this new view of teaching and learning but it becomes one of several processes available to students for learning, not the only process.

Teaching is a cultural ritual (Nuthall 2005) that was invented by humans to help humans learn. Myths about teaching and learning can be viewed as cultural artifacts of the traditional approaches. The myth, discussed earlier, that students will learn whatever the teacher presents unless they are not working or not working hard enough can now be viewed as too narrow an explanation. This ritual of what constitutes good teaching is not based upon research but rather upon our collective experience of teaching and learning. In this experience, we “know” what constitutes good teaching. It occurs when the teacher exhibits the “customary” teaching behavior, which up to recent times in science has been primarily lecturing. This is the teaching format that most of us experienced in our academic careers. In this model of teaching, successful learning is often viewed as students being able to repeat on tests and quizzes what the teacher has said during lecture. It is the customary way of doing things and thus constitutes our teaching and learning culture. This culture is promoted by books that offer advice to new teachers (McKeachie 2002; Bligh 2000); however, these “good” teaching practices can be part of the culture promoting the status quo. The presence of an entrenched teaching culture is difficult to confront in teacher training programs because most pre-service teachers have primarily experienced lecturing as the way science is taught. Lecture is what pre-service or new teachers experience most often; it is familiar and thus easy to emulate. The same is true for science graduate students, who do not typically take education courses but go on to become our new college professors. They experience the culture of teaching through the use of lecture both in their own university courses and in many of the college departments where they are hired, and thus they strive to emulate it as new professors.

Can research challenge the current culture of teaching and learning? The answer depends on what questions or myths the research chooses to challenge and how effectively it tests them. Research that asks the easy questions and uses standard research methodologies and previously established tools may promote rather than challenge the current teaching culture. In order to test whether the teaching/learning culture is valid and whether the myths that exist are real, the research should be theory-based and tailored specifically to the question or myth being explored. This means that research methodologies may be more unorthodox, and the tools used may need to be designed, validated, and tested for reliability for each specific research project. This approach makes the research more difficult to design but more likely to succeed in challenging the current culture of teaching and learning.

Specific Myths About Teaching and Learning

How Long Can Students Pay Attention in Lecture?

One myth invoked to explain why more learning does not occur during lecture is that students pay attention for less than the length of a standard 50- or 55-min lecture. Typical attention spans of 10–20 min (Sousa 2006) to 30 min (Bligh 2000) are proposed in teaching books. To address this problem, McKeachie (2002) suggests, as does Sousa, that passive lectures be broken into shorter segments interspersed with teacher–student interaction. Research on the topic has been scarce. Johnstone and Percival (1976) published one of the few studies that measured student attention in lecture directly, through observations of individual students in a class. Their conclusion was that the amount of time students pay attention during lectures is cyclic, and the length of each cycle depends on where it occurs in the lecture.

A recent experiment (Bunce et al. 2010) showed a significant difference in the attention students reported as the lecture proceeded. This research utilized an atypical methodology that took advantage of the personal response device (clicker) as both an interactive learning device and a research instrument. Students in three different general chemistry classes (engineering, nursing, and nonscience majors) participated in the study. Students used clickers during lecture to individually record when they were aware that their attention had lapsed. Students were instructed to press “1” if the lapse was for a minute or less, “2” for 2–3 min lapses, and “3” for a lapse of 5 min or more. This research design, utilizing clickers as both a research tool and a teaching tool, allowed for the daily collection of data in the three different courses for 4 weeks with minimal disruption to teachers or students. While Johnstone and Percival’s (1976) data consisted of researchers recording attention by observing student facial expressions in lecture, the clicker data relied on the participants recording their own attention lapses.
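
To make the self-report scheme concrete, the sketch below is a hypothetical illustration of how such lapse codes might be tallied by lecture segment; the data, segment labels, and variable names are invented, not taken from the study.

```python
from collections import Counter

# Hypothetical stream of (segment_type, lapse_code) clicker reports from one
# lecture. Codes follow the scheme described above: 1 = a lapse of about a
# minute or less, 2 = a 2-3 min lapse, 3 = a lapse of 5 min or more.
reports = [
    ("lecture", 1), ("lecture", 1), ("demonstration", 1),
    ("clicker_question", 1), ("lecture", 2), ("lecture", 1), ("lecture", 3),
]

# Tally how many lapses of each duration code were reported per segment type.
tallies = {}
for segment, code in reports:
    tallies.setdefault(segment, Counter())[code] += 1

for segment, counts in sorted(tallies.items()):
    print(f"{segment}: {sum(counts.values())} lapses, by code: {dict(sorted(counts.items()))}")
```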

The data show that after a short initial period of settling down at the beginning of the lecture, students pay attention for about 4.5–5.0 min and then in 2–3 min cycles interspersed with lapses in their attention. Most of the self-reported attention lapses in this research were of short duration (usually 1 min or less). Most interestingly, the experiment included a comparison of lecture segments vs. segments of comparable length in which interactive questioning via personal response systems (clickers) or chemical demonstrations were used. The data show that students reported significantly fewer lapses in attention during both interactive clicker questioning and chemical demonstrations than during lecture. There was no significant difference in the lapses of attention reported between the two nonlecture segments (clicker and chemical demonstration); both appear to engage student attention equally well. An added benefit of using either clickers or chemical demonstrations is that students reported fewer attention lapses during a comparable-length lecture segment following the clicker or demonstration segment than during the lecture segment preceding it.

This research demonstrates that students cycle between attention and inattention in ever-shortening cycles as a lecture progresses. It also shows that using interactive segments such as clicker questions or chemical demonstrations can have both an immediate and a delayed benefit on student attention. Teachers can use this information to structure their lectures into smaller segments interspersed with interactive segments such as clicker questions and/or demonstrations to maximize students’ attention.

Is the Use of Clicker Questions More Effective than Frequent Online Quizzes?

Myths can arise about newer pedagogical approaches as well as about traditional ones. One myth concerning the use of personal response devices (clickers) in lecture is that their use will automatically result in increased student achievement. Clicker use is becoming widespread in secondary and college classrooms, with an estimated 8 million users (Interactive Clickers 2006). The main use of clickers is as a means of implementing ConcepTests (Mazur 1997) in lecture. ConcepTests are conceptual questions that can be asked during a lecture. Students respond by choosing an answer. The clicker software analyzes the student data in real time and constructs a graph providing teacher and students with a visual representation of how many students have chosen each answer. With this information the teacher can decide whether the material just presented has been learned by the students and then either proceed to the next concept or recognize that the concept is not well understood and review or re-teach it. It seems logical that such an interactive pedagogy would result in both increased student learning and more student-centered, responsive teaching. However, most research on this topic has not dealt directly with the effect of clickers on student learning. Most studies (MacArthur and Jones 2008) measure student attitude or engagement when using clickers. The effect on student learning of using clickers as a way of delivering ConcepTests has not been directly studied.
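
The real-time tally-and-display step is simple to picture. The following sketch is a hypothetical illustration, not the actual clicker software; the responses and answer choices are invented. It counts a batch of answers and renders the kind of bar graph the class would see.

```python
from collections import Counter

# Hypothetical batch of clicker responses to one ConcepTest question
# with answer choices A-D.
responses = ["A", "C", "C", "B", "C", "D", "C", "A", "C", "B"]

counts = Counter(responses)
total = len(responses)

# Render a simple text histogram of the answer distribution, analogous to
# the graph that clicker software displays to the class in real time.
for choice in "ABCD":
    n = counts.get(choice, 0)
    print(f"{choice}: {'#' * n:<10} {n:2d} ({100 * n / total:3.0f}%)")
```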

Bunce et al. (2006) investigated the relative impact on student achievement in general chemistry of ConcepTests delivered with clickers versus daily online quizzes. Both the in-class clicker questions and the online daily quiz questions were keyed to questions on regularly scheduled hour exams. Student scores on corresponding questions on the end-of-year standardized exam were also examined. The results showed that students scored significantly higher on regularly scheduled hour exam questions that had a corresponding online quiz question on that topic than on those that had a corresponding ConcepTest clicker question. There was no significant difference between the effect of ConcepTest clicker questions and the online quiz questions on student achievement on the standardized final exam.

The explanation for this may have more to do with the implementation of the two pedagogies than with the pedagogies themselves. In this experiment, the ConcepTest clicker questions were used in class, but students did not have access to either the questions or the answers after class. By contrast, the online quizzes with the correct answers were available on demand to students throughout the semester. A questionnaire documented how students reviewed the online quiz questions while studying for the regularly scheduled tests. Students reported relying on reviewing the online quizzes in preparation for the regularly scheduled tests but did not review the ConcepTest clicker questions because they did not have access to them. Although students did significantly better on hourly teacher-written test questions that had an online quiz counterpart than on those that had a ConcepTest clicker counterpart, there was no significant difference between the two on the standardized final exam.

The seemingly small difference in implementation, semester-long access to the online quiz questions but not to the ConcepTest clicker questions, might have been expected to have inconsequential effects. The reality is that when both a teaching/learning tool and its implementation are in line with how the learner uses them, there can be significant effects on student learning. Using just the tool may not be enough to effect change; implementing the tool in line with how students use it can be equally important. In this case, it was hypothesized that the ability to review and reflect on the content of the online quiz questions was responsible for the increase in student achievement on corresponding questions on the regularly scheduled tests. The inability of students to review and reflect on ConcepTest clicker questions is thought to have interfered with gains in student achievement on corresponding questions on those tests. It is not known whether students reviewed the online quiz questions in preparation for the standardized final exam, but it is conceivable that, due to time constraints in preparing for a final exam, this resource was not well used. In answer to the original question about whether the use of clickers increases student achievement, the result of this research is that the way a tool is implemented must be congruent with how students use it. If not, no tool, no matter how theoretically sound, is likely to result in significant change in student achievement.

Can Students Successfully Answer Essay Questions in Chemistry?

In chemistry, like other sciences, success is often measured by the number of problems a student can solve. In chemistry, these questions are meant to be applications of chemistry concepts. Often chemistry tests include a high percentage of multiple-choice questions that test the recall of concepts or facts necessary to solve the open-ended application problems that may also appear on the test. Some educators have questioned whether these numerical application problems adequately measure students’ conceptual understanding of chemistry (Moore 1997). Meaningful learning as defined by Ausubel (Novak and Gowin 1984) might be better tested through essay-type questions that ask students to construct a logical argument to address a conceptually based application question. The use of essay-type questions has several drawbacks in large introductory college chemistry courses: students do not typically do well on such questions; the grading of such questions is labor intensive; and the teaching assistants who typically help grade chemistry tests in large classes may not be able to reliably judge how well students are addressing the question asked. The latter two problems can be addressed through course management and in-service teaching assistant training programs, but the first problem, that students typically do not do well on essay questions, is worthy of further exploration.

From a cognitive point of view, there are two aspects to the use of essay-type questions on assessments that should be addressed, namely, the number and level of difficulty of the chemistry concepts being tested and the way the question is phrased. Both of these variables affect the cognitive demand the question places on the learner. According to Cognitive Load Theory (Sweller 1994), the concept being assessed is an intrinsic variable and the way the assessment is designed or phrased is an extrinsic variable. Both intrinsic and extrinsic variables must be taken into consideration in the assessment process. Intrinsic variables such as chemistry concepts include all the previous knowledge on which the current concept is based. A question that seemingly addresses a single chemical concept may require the learner to understand three to five previously learned chemistry concepts that support the understanding of the current concept. The number of prerequisite chemistry concepts involved directly affects the level of cognitive demand of the question being asked. The cognitive demand of the extrinsic variable is affected by how the question is asked. If the information needed to solve the problem is presented directly in the stem of the problem, the cognitive demand is lower. If the student is expected to provide an answer using assumptions that are not explicitly stated in the problem or that must be derived from the information given, the cognitive demand is increased.

The net result of Cognitive Load Theory for essay questions is that the cognitive demand of a chemistry question can quickly exceed the ability of many students to solve it successfully. But the transition of students from novice to expert understanding can be hampered if students are not taught how to construct the kind of logical argument that essay-type questions require. The use of essay questions helps students organize the information in their long-term memory (Wandersee et al. 1994) and thus increases the likelihood that the information will be more accessible to them in other situations. Russell (2004) has shown that students need multiple exposures to both critiquing and writing essay questions in order to develop the skill of producing persuasive and logical arguments. Before students can adequately address essay-type questions, it is important to determine whether they can recognize a complete and cogent (logical) answer to such questions.

Bunce and VandenPlas (2006) explored this question with undergraduate students enrolled in a nonscience majors’ course. Students were shown sample essay questions and possible answers online 24 h prior to taking an exam on the same material. Students were then asked to analyze the answers provided to these essay questions for completeness and cogency using a Likert scale. In some cases, students were also asked to either plan or construct answers to other chemistry essay questions. The data show that in most cases students were able to identify the correct answer to an essay question, but their overall ability to rate the completeness and cogency of these answers on a Likert scale was moderate (2.7 and 2.9 out of 5, respectively). When one of the questions students had examined was then included on a test within 24 h, there was no significant correlation between the students’ ability to recognize a complete and cogent answer prior to the test and their ability to create a complete and cogent answer on the test.

These results suggest that being able to recognize correct answers to essay questions is not enough. Students need more intensive training in constructing and analyzing answers to essay questions for correctness, completeness, and cogency. Essay questions of differing cognitive demand should be included in this training. Students’ ability to answer essay questions may have more to do with their understanding of what is expected of them in terms of completeness and cogency than with their understanding of the chemistry concepts. These variables should be addressed separately in the teaching experience. It may not be that students cannot answer essay questions; it may be that they do not realize what is expected of them in order to answer such questions at the level their teachers deem satisfactory. Being able to correctly and adequately address essay questions is further complicated by the differing cognitive demands that essay questions can make on a student. All of these variables should be considered when including essay questions as part of a student assessment. The issue is more complicated than may be initially apparent.

How Long Do Students Retain Their Chemical Knowledge After a Testing Situation?

Another myth that exists among teachers is that students forget what they have learned soon after completing a test. This phenomenon is seen at both the secondary and undergraduate levels. Wheeler et al. (2003) report that undergraduates experience an extinction of knowledge within 48 h of a testing situation. Anderson et al. (2004) modify this assertion by reporting that students will regain some of this knowledge if multiple tests are given following the initial test. Regaining lost knowledge is seen as a result of students using multiple testing occasions to help them develop a stronger conceptual schema; stronger schemas result in increased long-term retention, according to Anderson et al. Hockley (1992) makes a distinction between forgetting discrete pieces of information and forgetting information that is associative. According to Hockley (1992), associative information is forgotten less quickly than discrete information. Anderson et al. (2004) explain this in terms of associative information having more retrieval paths from long-term memory than discrete information.

Curricula and teaching methods can affect whether chemistry is taught as a list of discrete facts or as an associated body of knowledge. Curricula that are organized using a spiral approach present a limited amount of information and then revisit and build upon it as new topics are introduced. This can be done either explicitly, by including outlines of topics presented and revisited in the table of contents, or implicitly, in the organization of material from one chapter to the next. Teaching, too, can either emphasize connections between concepts or simply present each new concept as it occurs in the syllabus. Students will address the learning of concepts in a manner that is consistent with the way the concepts are presented by the teacher and/or textbook (linear or spiral, discrete or associative). In keeping with a theory of memory that includes the formation and utilization of schemas, how the concepts are taught may affect how long students remember the information following a testing occasion.

Bunce et al. (2009) looked at the decay of student learning in three courses spanning the undergraduate and secondary levels (undergraduate nursing and nonscience-major general chemistry and a secondary-level honors general chemistry course). The curricula and teaching methods used in these courses included both explicit and implicit spiral approaches. Follow-up tests were administered at different time intervals following a regularly scheduled testing occasion. Each follow-up test included two questions that had appeared on the scheduled exam. Students did not have access to either answer keys or discussion of the exam questions until all follow-up tests were completed. Students cycled through the short-, medium-, and long-term delayed follow-up tests for the three scheduled tests used in this study. This meant that each participating student answered two questions that had appeared on one of the three scheduled exams after either a short, medium, or long delay. The regularly scheduled exams were completed as paper-and-pencil tests. The delayed follow-up tests were completed either online or as paper-and-pencil tests, as decided by the teacher of the course.
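
One plausible reading of this cycling is a counterbalanced, Latin-square-style rotation of delay conditions across the three exams. The sketch below is a hypothetical illustration of such a rotation only; the group labels are invented, and the actual assignment used in the study may have differed.

```python
# Hypothetical counterbalancing: rotate the delay condition assigned to each
# student group across the three scheduled exams so that every group
# experiences each delay (short, medium, long) exactly once.
delays = ["short", "medium", "long"]
exams = ["Exam 1", "Exam 2", "Exam 3"]

for group, offset in enumerate(range(len(delays)), start=1):
    assignment = {exam: delays[(i + offset) % len(delays)]
                  for i, exam in enumerate(exams)}
    print(f"Group {group}: {assignment}")
```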

The results show that students who experienced an explicit spiral curriculum, in which the chemistry was presented in an overall associative framework (undergraduate nursing general chemistry and secondary-level honors general chemistry), showed no significant decay in their knowledge from the scheduled testing occasion through the long-term delay (10–17 days). Students who experienced an implicit spiral curriculum, in which the associative framework of the chemistry concepts was not explicit (undergraduate nonscience majors), experienced a significant decay in knowledge between the scheduled testing occasion and the first delayed follow-up test 2 days later. This result is consistent with the Wheeler et al. (2003) time frame of 48 h for the decay of knowledge. The results support the conclusion that students do forget knowledge after a scheduled test if that knowledge is not associated with new information being presented. The use of explicit spiral teaching methods and curricula, in which knowledge used on one testing occasion is seen as needed to understand new knowledge, can help reduce the decay of student knowledge for at least 2 weeks (10–17 days) following a scheduled exam.

Conclusions

Myths about teaching and learning persist when they support the current culture of teaching and learning. Often this culture of teaching relies on the successful passing of knowledge from the expert (teacher) to the willing novice (diligent student) as the accepted way that teaching and learning take place. Any failures in this method are assumed to be the result of the unwillingness or inability of the students to learn the content presented. In order to challenge this myth, the prevalent teaching and learning culture should be examined. If students are expected to learn what the teacher tells them, then their inability to pay attention throughout a lecture may result in failure. If the learning they experience in lecture emphasizes memorization of individual topics rather than integrated understanding across topics, then knowledge may decay rapidly after students complete a scheduled exam. In addition, students might not be able to successfully answer essay questions on these topics if they have not been trained in what constitutes a correct, complete, and cogent answer. Finally, if an essay question is constructed with a high intrinsic (numerous concepts) and/or extrinsic (use of implicit assumptions) demand, it may exceed students’ ability to address it successfully.

Introducing new pedagogies into our present teaching and learning culture will not necessarily result in higher student achievement if these pedagogies are not implemented with attention to the tenets of how the brain operates and learning occurs. For instance, the use of clickers as a way to provide multiple self-assessment opportunities for students may not be effective if students cannot review and reflect on both the questions asked and the answers chosen. The use of essay questions on assessments may not live up to the potential of helping students better organize their knowledge if the analysis of what constitutes a complete and coherent essay is not discussed and practiced with students outside of a testing situation and reinforced with an explicit spiral curriculum.

New pedagogies that are consistent with what we know about how the brain operates and learning occurs can be successful if implemented accordingly. The use of interactive ConcepTests as delivered by clickers or chemical demonstrations that break the flow of information in a lecture can be effective in increasing student attention both during their use and in subsequent lectures. The use of clickers and demonstrations in lecture can help increase student attention by providing students with nontesting opportunities to evaluate their understanding of a concept and reflect on their learning.

Myths are difficult to dispel if the research that challenges them is not grounded in theory. A theoretical framework can be used to help explain the results of research in terms of the basic premise of the teaching culture being challenged. Without this larger picture, each research result appears as a stand-alone artifact that at best earns a passing acceptance on the part of the reader. By contrast, within a theoretical framework, individual research results can contribute to the mounting evidence needed to challenge our current teaching and learning culture. Experiments that challenge existing beliefs about teaching and learning must address the tough issues and be well designed in order to provide convincing results. To ask the tough questions, the research should facilitate the collection of pertinent data. Such research will necessitate the development and use of new, targeted, valid, and reliable instruments. It is not adequate to rely on instruments designed for another purpose without demonstrating that they can provide the data needed to address the questions asked. Data from such experiments should be analyzed using statistics that adequately control for threats to validity. These statistics must be applied with attention to whether they are appropriate for the question asked and whether the necessary assumptions about the data are met before the statistic is applied. Without these safeguards, research could be produced that is neither valid nor reproducible. As a result of misusing statistics, more myths could be produced and the teaching/learning process could continue to be misunderstood. A consequence of poorly conceived and/or analyzed research is that students will continue to be categorized as lazy or uncaring when in reality they might not be able to learn effectively and demonstrate that learning due to inappropriate teaching and assessment methods. The purpose of research should be to challenge myths, not produce new ones.
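
As a minimal illustration of checking assumptions before applying a statistic, consider the hypothetical sketch below; it is unrelated to any study discussed above, the data are simulated, and the particular tests chosen are only one common approach. It checks normality and equality of variances before choosing between the standard and Welch forms of an independent-samples t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated exam scores for two hypothetical instructional conditions.
group_a = rng.normal(loc=74, scale=8, size=40)
group_b = rng.normal(loc=78, scale=12, size=40)

# Assumption 1: approximate normality within each group (Shapiro-Wilk test).
_, p_norm_a = stats.shapiro(group_a)
_, p_norm_b = stats.shapiro(group_b)
print(f"Shapiro-Wilk p-values: {p_norm_a:.3f}, {p_norm_b:.3f}")

# Assumption 2: equal variances across groups (Levene's test).
_, p_var = stats.levene(group_a, group_b)

# If the equal-variance assumption is doubtful, fall back to Welch's t-test.
equal_var = p_var >= 0.05
t, p = stats.ttest_ind(group_a, group_b, equal_var=equal_var)
print(f"t = {t:.2f}, p = {p:.4f} (equal variances assumed: {equal_var})")
```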