Abstract
The term science competence describes the results of educational work in science and is therefore linked to many different fields of research. This makes competence one of the central issues in science education on the one hand and, on the other, causes difficulties in defining science competence in a way that is useful for research and teaching. This chapter aims to define science competence with a rather cognitive focus, concentrating on three central aspects of being competent in science. First, the aspect of cognitive ability is discussed, which underlines the idea of defining competence as an ability to solve problems. Second, the importance of content is detailed to argue that competence is domain specific. Third, the scientific literacy aspect is illustrated to argue that competence is represented by a decontextualized knowledge structure that is applied to specific and contextualized problems. These aspects lead to a definition of competence, which is finally discussed from the perspective of measurement.
For more than 50 years, the idea of competence has been discussed in science education and psychology to describe different kinds of capability to master a certain domain (Winterton et al. 2005). It can be used to describe the outcome of school education (Hartig et al. 2008); such outcomes include emotional, volitional, and cognitive aspects as well as required skills, abilities, and attitudes (Weinert 2001). However, it is a difficult concept to grasp, as it can be investigated from many perspectives (Csapó 2004). Therefore, to arrive at a measurable construct, we limit our view of competence to a cognitive perspective, as many researchers in this field do (Hartig et al. 2008), and leave out the motivational aspects that were originally stressed by Robert White (1959).
Theoretical Perspectives on Competence
Science competence is understood as the underlying cause of successful or unsuccessful performance (Chomsky 1965) in the domain of science (Connell et al. 2003). For example, Dominique Rychen and Laura Salganik (2003) describe key competencies for future success in society. Willis Overton (1985) shows that the relation between competence and performance is influenced by many other variables of the situation and the person (cf. Bandura 1990). For example, the choice of mental models (Bao and Redish 2006) and arguments (Zimmermann 2005) depends on the situation. Performance in tests depends, for example, on the time allowed or the choice of items (Kalyuga 2006).
Increasing the likelihood of successful performance through teaching is an underlying idea in education (Csapó 1999). Since competence influences performance, many fields of science education are related to competence (Adey et al. 2007). In the following, we outline fields related to competence and how each contributes to the idea of applying structured knowledge (Albert 1994). The aim is to develop a model of competence (cf. Pellegrino et al. 2001) by linking intelligence, problem solving, and knowledge (Glaser 1983). Csapó (2004) describes a person's ability to perform successfully in terms of three aspects: the cognitive aspect, the content aspect, and the literacy aspect as "the broadly applicable and socially valuable knowledge" (Csapó 2004, p. 35). We use these aspects to structure our discussion of the different fields, as this implements the idea of competence as a mixture of general and specific abilities and knowledge (Winterton et al. 2005).
Cognitive Aspect
Intelligence is a parameter summarizing general cognitive abilities and providing a measure for them (Resnick 1976). It is thought to be more or less independent of domain and content (Adey et al. 2007). However, David McClelland (1973) shows that intelligence has only limited importance in describing successful performance in a specific domain. He suggests that a theory of competence would result in a list of activities used by successfully performing individuals (McClelland 1973).
Such a theory could be the taxonomy of Benjamin Bloom (1956). It is one example of models that rank abilities by cognitive processes, with the transfer process as the most demanding one (Klauer 1989). It was further elaborated by Lorin Anderson and David Krathwohl (2001), who rank activities by analyzing which abilities are needed to perform successfully in the respective activities.
Another option would be the expert and novice paradigm. Experts can be differentiated from novices by the problem-solving strategies they have at hand (Boshuizen et al. 2004). That is, these strategies are part of their competence (Sternberg and Grigorenko 2003). With cognitive load theory (Sweller 1994) it can be argued that the limited capacity of the working memory requires an elaborated knowledge structure to solve complex problems. Problem solving as a cognitive task, therefore, can be discussed under the perspective of general strategies (e.g., Dossey et al. 2004) as well as under a science-specific perspective considering science knowledge (Klahr and Dunbar 1988). In a nutshell, problem-solving tasks require a general and science-specific competence.
Content Aspect
In order to measure content-specific abilities, the related content first has to be described and structured (Albert 1994). School science content typically includes knowledge, typical procedures in science such as modeling, experimentation, and argumentation, and meta-knowledge about the nature of science and scientific inquiry. Curricula and educational standards are the basis for the selection of content and the description of desired competencies. Although every nation defines its own curriculum, there is considerable overlap in the choice of content and competencies (Parker et al. 1999).
The knowledge base of science is represented by mental models based on scientific theories and models that should be learned by students (Gentner and Stevens 1983). The structure of those mental models has been described for many concepts in science, for example, for matter and its transformation (Andersson 1990), for energy (Lijnse 1990), and for mechanical waves (Wittmann et al. 1999). These mental models are based on concepts, whereby students' concepts might differ from scientific concepts of the same issue (Carmichael et al. 1990). Concepts and mental models are structured by the big ideas of science, which are often described as basic concepts in science, for example, energy (Dawson-Tunik 2006) and matter (Liu and Lesniak 2006).
The role of experiments for school science is well investigated and widely discussed in science education (Lunetta 1998). Experiments are part of scientific working and therefore embedded into scientific inquiry which is seen as essential for learning science (Minstrell and van Zee 2000). Experiments are used for argumentation and reasoning in science (Zimmermann 2005) fostering communication skills (Saab et al. 2007) and logical reasoning (Nunes et al. 2007). In this context analogies are used for modeling phenomena (Pauen and Wilkening 1997) or for illustrating certain concepts, for example, force (Palmer 1997).
Meta-knowledge, that is, beliefs and knowledge about knowledge in a certain domain (Bromme 2005), is also part of school science content (cf. American Association for the Advancement of Science (AAAS) 1993; National Research Council (NRC) 1996). Meta-knowledge is described as the nature of science and concerns, for example, the role of experiments in the scientific discovery process rather than the "how-to" of experiments. The nature of science allows for judging scientific findings and is useful for participation in adult life (Lederman et al. 2002).
Literacy Aspect
The Programme for International Student Assessment (PISA) refers to the concept of scientific literacy as an internationally consensual aim of education (Organisation for Economic Co-operation and Development (OECD) 1999). Scientific literacy is understood as a set of competences to be acquired as a result of education (Bybee 1997) and is substantially different from a scientist's competence (OECD 1999). The main difference is that competence in the sense of scientific literacy requires detaching the content from the context. Although content is learned in specific situations, the ability to transfer is the main aspect of competence (Csapó 1999); that is, the ability to apply strategies in various contexts (Garner 1990) and to use mental models in different settings (Lijnse 1990). However, this is sometimes not even achieved by adults (Murray et al. 2005), owing to the difficulty of transferring between domains (Roth 1998). Still, competence as the ability to detach science content from situations is seen as important for full participation in adult life (Connell et al. 2003).
In a more formal way, and closer to the original meaning of Csapó's literacy aspect, an individual's literacy can be described by complexity. While complexity can be used in a rather qualitative sense to distinguish between higher and lower cognitive processes (Kail and Pellegrino 1989) or between reasoning and acting (Zelazo and Frye 1998), it can also be used to describe a hierarchy of structures within a system (Commons 2007). Since scientific knowledge can be seen as such a system with an inner structure (Gagné and White 1978), complexity can be used to rank solution processes (Williams and Clark 1997), compare different knowledge structures (Nicolis and Prigogine 1987), or describe different levels of the knowledge structure (Kauertz and Fischer 2006). The structure of knowledge is made up of elements, for example, scientific facts, which are linked together by functional relations (Novak 1998). This structure represents basic concepts in science such as energy and system. Because basic concepts include a large number of scientific facts and relations (cf. Resnick and Ford 1981), an individual's literacy is represented by the level of complexity at which the person can deal with the particular basic concepts.
Definition of Competence
The notion of competence as a developable capacity to detach science-specific cognitive processes and knowledge from one situation and apply them to scientific problems in a social setting is described by the OECD in terms of scientific literacy:
Scientific literacy is the capacity to use scientific knowledge, to identify questions and to draw evidence-based conclusions in order to understand and help make decisions about the natural world and the changes made to it through human activity. (OECD 1999, p. 60)
This definition embraces all considerations described earlier and names possible indicators, such as using knowledge, identifying questions, and drawing conclusions, by which competence can be identified in large-scale assessment.
A Measurement Perspective on Competence
Competence as a multifaceted variable (Csapó 2004) makes it necessary to define an inner structure of competence (Mislevy et al. 2002). This structure hypothesizes differences between specifications of competence which are theoretically caused by different content (e.g., basic concepts), different cognitive activities, and different levels of competence or literacy. The structure can be illustrated by a list of abilities or by a grid, where every cell of the grid lists specific abilities, skills, and so on, classified by the assumed differences between those activities. Such a grid is not necessarily limited to two dimensions but could also have three dimensions, which would mean a cube, or even more than three. Since the list of activities in each cell might be too long or open-ended, the cells can instead be described by the dimensions. For example, content could be the first dimension, with each basic concept making up one row, and cognitive activities the second dimension, with, for example, applying and transfer making up the columns. Each cell is then defined by a basic concept and a cognitive activity, for example, energy and applying; in this cell, any ability would be registered that requires the application of the energy concept. Using this grid, competence is structured in a competence model. The link between the competence model and the items of the test is established by task analysis (Jonassen et al. 1999). As a result of task analysis, each item fits into one cell of the grid that represents the competence model.
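The grid structure described above can be sketched as a small data model. The following is a minimal illustration in Python; the concept and activity names and the register_item helper are hypothetical assumptions for the sake of the example, not taken from any particular standards document:

```python
from itertools import product

# Hypothetical dimension values, for illustration only.
basic_concepts = ["energy", "matter", "system"]                  # content dimension (rows)
cognitive_activities = ["reproducing", "applying", "transfer"]   # columns

# Each cell of the competence model is one (concept, activity) pair.
competence_model = {cell: [] for cell in product(basic_concepts, cognitive_activities)}

def register_item(item_id, concept, activity):
    """Task analysis assigns each test item to exactly one cell."""
    competence_model[(concept, activity)].append(item_id)

register_item("item_01", "energy", "applying")   # e.g., apply energy conservation
register_item("item_02", "matter", "transfer")   # e.g., transfer the particle model

print(competence_model[("energy", "applying")])  # -> ['item_01']
```

A third dimension (e.g., complexity levels) would simply add a further factor to the product, turning the grid into the cube mentioned above.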
Competence Models
Those models can be defined post hoc (e.g., OECD 1999, 2001) or a priori (e.g., Neumann et al. 2007). From a theoretical perspective, a priori defined models are more valuable (Wilson 2005) since they can be tested empirically, while post hoc models are informative for identifying possible critical elements of tasks (e.g., OECD 1999, 2001) but could fail to be reproduced in the next test (Klieme 2000). A sound a priori model as a basis for the test helps to validate its results, as the example of the Force Concept Inventory illustrates (Hestenes and Halloun 1995).
The competence model for the PISA study was made up of two dimensions: scientific processes and content in an area of application. The process dimension contained five different processes; for the content dimension, 13 major scientific themes with 13 areas of application were chosen, each theme being combined with one area of application. Every cell in this grid (see Fig. 47.1) was described, for example, as "[r]ecognising scientifically investigable questions using knowledge of human biology applied in the area of science in life and health" (OECD 1999, p. 66).
Validity of Competence Measurement
The multidimensionality of most competence models makes it difficult to prove their validity. Different kinds of validity need to be considered (Wilson 2005): validity concerning the assumed inner structure, that is, whether there are as many different dimensions as assumed in the a priori model (Hestenes and Halloun 1995); and validity concerning the goal of the assessment, that is, whether the test measures competence comparable to the PISA tests (cf. Pellegrino et al. 2001). Usually, those questions are already considered during test development through the underlying model (Harmon et al. 1997) and tested against the empirical data by comparing the empirical structure with the theoretical structure (e.g., Acton et al. 1994). Since competence models have a complex structure, and competence and performance are only linked by a certain probability moderated by many random influences (e.g., the context; Bao and Redish 2006), a large number of test items and large sample sizes are needed.
Since large-scale competence assessment needs many items, sophisticated statistical procedures such as item response theory (IRT) are required (cf. OECD 2001). IRT allows for computing a student's probability of solving items of a certain difficulty and therefore places student competence and item difficulty on the same scale (van der Linden and Hambleton 1996). One item can then characterize the competence of all students with a score at or below the value of the item. Therefore, the relation between items and students can be scrutinized, and the underlying structure of the item sample (which in fact is the competence model) and characteristics of the student sample (such as gender, age, or social background) can be investigated (cf. Rost 1990).
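For dichotomous items, the simplest IRT model, the Rasch model used in PISA scaling, expresses the probability of a correct response as a logistic function of the difference between person competence and item difficulty, both measured on the same logit scale. A minimal sketch (the function name is our own):

```python
import math

def rasch_probability(theta: float, difficulty: float) -> float:
    """Probability that a person with competence theta solves an item
    of the given difficulty (one-parameter logistic / Rasch model)."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# When competence equals difficulty, the solving probability is 0.5;
# it rises toward 1 as competence exceeds difficulty.
print(round(rasch_probability(0.0, 0.0), 2))  # -> 0.5
print(round(rasch_probability(2.0, 0.0), 2))  # -> 0.88
```

The 0.5 threshold is what makes persons and items directly comparable: students whose competence lies below an item's difficulty solve it with a probability below one half, which is the sense in which a single item can characterize all students at or below its scale value.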
Relevance of Results from Large-Scale Competence Tests
The relation between competence models and teaching is rather vague. Although competence measurement focuses on the results of learning, the underlying model cannot tell the teacher how to promote learning in a particular learning group. The model is rather a structure for reachable learning goals. Moreover, the results of large-scale competence assessments usually cannot be related to individuals or even classes, since the measurement errors at the individual level are too large.
Therefore, competence measurement is more informative for educational administration considering the complete educational system (e.g., OECD 1999, 2001). For example, in Germany the results of the Programme for International Student Assessment (PISA) led to a major change in the educational system and the establishment of national educational standards (KMK 2004). Comparing nations based on the competence of their students is meant to ensure the further development of the economy (OECD 1999) and to make social opportunities comparable and safeguard them as well (Miller 2004).
Empirically tested competence models can also inform curriculum development (Driver et al. 1994). Competence models could serve as a reference point to compare curricula (Kumar and Berlin 1998) and reduce them to relevant aspects, or to develop international curricula (Parker et al. 1999).
Future Research Perspectives on Competence
Because the results of large-scale assessments cannot inform teachers about an individual's developmental competence level, an individual diagnostic tool for teachers and researchers is needed (Hartig et al. 2008). This would require more detailed models that take different pathways of development into account.
The relation between competence and performance in social settings needs to be investigated as a matter of validity. As different studies have shown (Lijnse 1990; Rychen and Salganik 2003), the context strongly influences the relation between performance and competence. One aspect could be the linkage between science competence in school and later vocational competence (Rothwell and Lindholm 1999). Since competence in terms of scientific literacy is meant to allow successful participation in society (OECD 1999, 2001), and this goal does not yet appear to be sufficiently achieved (cf. Murray et al. 2005), the long-run effect of increasing competence is worth investigating.
References
Acton, W. H., Johnson, P. J., & Goldsmith, T. E. (1994). Structural knowledge assessment: Comparison of referent structures. Journal of Educational Psychology, 86, 303–311.
Adey, P., Csapó, B., Demetriou, A., Hautamaki, J., & Shayer, M. (2007). Can we be intelligent about intelligence? Why education needs the concept of plastic general ability. Educational Research Review, 2, 75–97.
Albert, D. (Ed.). (1994). Knowledge structures. New York: Springer.
American Association for the Advancement of Science AAAS. (1993). Benchmarks for science literacy. New York: Oxford University Press.
Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Longman.
Andersson, B. R. (1990). Pupils’ conceptions of matter and its transformations (age 12–16). In P. L. Lijnse, P. Licht, W. de Vos, & A. J. Waarlo (Eds.), Relating macroscopic phenomena to microscopic particles: A central problem in secondary science education (pp. 12–35). Utrecht, The Netherlands: CD-ß Press.
Bandura, A. (1990). Conclusions: Reflections on nonability determinants of competence. In R. J. Sternberg & J. Kolligian (Eds.), Competence considered (pp. 315–362). New Haven, CT: Yale University Press.
Bao, L., & Redish, E. F. (2006). Model analysis: Representing and assessing the dynamics of student learning. Physical Review Special Topics – Physics Education Research, 2, 010103.
Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals (1st ed.). New York: Longmans Green.
Boshuizen, H. P. A., Bromme, R., & Gruber, H. (Eds.). (2004). Professional learning: Gaps and transitions on the way from novice to expert. Dordrecht, The Netherlands: Kluwer.
Bromme, R. (2005). Thinking and knowing about knowledge: A plea for and critical remarks on psychological research programs on epistemological beliefs. In F. Seeger (Ed.), Activity and sign – Grounding mathematics education (pp. 191–201). Dordrecht, The Netherlands: Kluwer.
Bybee, R. W. (1997). Toward an understanding of scientific literacy. In W. Gräber & C. Bolte (Eds.), Scientific literacy, an international symposium (pp. 37–68). Kiel, Germany: IPN.
Carmichael, P., Driver, R., Holding, B., Phillips, I., Twigger, D., & Watts, M. (1990). Research on students’ conceptions in science: A bibliography. Leeds, UK: University of Leeds.
Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Commons, M. L. (2007). Introduction to the model of hierarchical complexity. Behavioral Development Bulletin, 13, 1–6.
Connell, M. W., Sheridan, K., & Gardner, H. (2003). On abilities and domains. In R. J. Sternberg & E. L. Grigorenko (Eds.), The psychology of abilities, competencies, and expertise (pp. 126–155). Cambridge, UK: Cambridge University Press.
Csapó, B. (1999). Improving thinking through the content of teaching. In J. H. M. Hamers, J. E. H. van Luit, & B. Csapó (Eds.), Teaching and learning thinking skills (pp. 37–62). Lisse, The Netherlands: Swets and Zeitlinger.
Csapó, B. (2004). Knowledge and competencies. In J. Letschert (Ed), The integrated person. How curriculum development relates to new competencies (pp. 35–49). Enschede, The Netherlands: Consortium of Institutions for Development and Research in Education in Europe (CIDREE).
Dawson-Tunik, T. L. (2006). Stage-like patterns in the development of conceptions of energy. In X. Liu & W. Boone (Eds.), Applications of Rasch measurement in science education (pp. 111–136). Maple Grove, MN: Jam Press.
Dossey, J., Hartig, J., Klieme, E., & Wu, M. (2004). Problem solving for tomorrow’s world: First measures of cross-curricular competencies from PISA 2003. Paris: OECD Publications.
Driver, R., Leach, J., Scott, P., & Wood-Robinson, C. (1994). Young people’s understanding of science concepts: Implications of cross-age studies for curriculum planning. Studies in Science Education, 24, 75–100.
Gagné, R. M., & White, R. T. (1978). Memory structures and learning outcomes. Review of Educational Research, 48, 187–222.
Garner, R. (1990). When children and adults do not use learning strategies: Toward a theory of settings. Review of Educational Research, 60, 517–529.
Gentner, D., & Stevens, A. L. (1983). Mental models. Hillsdale, NJ: Lawrence Erlbaum.
Glaser, R. (1983). The role of knowledge. Technical report. Pittsburgh, PA: University of Pittsburgh, Learning Research and Development Center.
Harmon, M., Smyth, T.A., Martin, M.O., Kelly, D.L., Beaton, A.E., Mullis, I.V.S., et al. (1997). Performance assessment in IEA’s third international mathematics and science study. Chestnut Hill, MA: Boston College.
Hartig, J., Klieme, E., & Leutner, D. (Eds.). (2008). Assessment of competencies in educational contexts: State of the art and future prospects. Göttingen, Germany: Hogrefe & Huber.
Hestenes, D., & Halloun, I. (1995). Interpreting the force concept inventory – A response to Huffman and Heller. The Physics Teacher, 33, 502–506.
Jonassen, D. H., Tessmer, M., & Hannum, W. H. (1999). Task analysis methods for instructional design. Mahwah, NJ: Lawrence Erlbaum.
Kail, R., & Pellegrino, J. W. (1989). Menschliche Intelligenz (2nd ed.) [Human intelligence]. Heidelberg, Germany: Spektrum der Wissenschaft.
Kalyuga, S. (2006). Rapid cognitive assessment of learners’ knowledge structures. Learning and Instruction, 16, 1–11.
Kauertz, A., & Fischer, H. E. (2006). Assessing students’ level of knowledge and analysing the reasons for learning difficulties in physics by Rasch analysis. In X. Liu & W. Boone (Eds.), Applications of Rasch measurement in science education (pp. 212–246). Maple Grove, MN: Jam Press.
Klahr, D., & Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12, 1–48.
Klauer, K. J. (1989). Teaching for analogical transfer as a means of improving problem-solving, thinking and learning. Instructional Science, 18, 179–192.
Klieme, E. (2000). Fachleistungen im voruniversitären Mathematik- und Physikunterricht: Theoretische Grundlagen, Kompetenzstufen und Unterrichtsschwerpunkte [Achievements in pre-university math and physics lessons: Theoretical basics, competence levels and lesson foci]. In J. Baumert, W. Bos, & R. Lehmann (Eds.), TIMSS III Dritte Internationale Mathematik- und Naturwissenschaftsstudie – Mathematische und naturwissenschaftliche Bildung am Ende der Schullaufbahn. Band 2: Mathematische und physikalische Kompetenzen am Ende der gymnasialen Oberstufe [TIMSS III Third international mathematics and science study – mathematics and physics competence at the end of upper secondary school] (pp. 57–117). Opladen, Germany: Leske und Budrich.
KMK [Standing Conference of the Ministers of Education and Cultural Affairs of the Länder in the Federal Republic of Germany]. (2004). Bildungsstandards im Fach Physik für den mittleren Schulabschluss [Educational standards for physics at the end of compulsory school]. München, Germany: Luchterhand.
Kumar, D., & Berlin, D. (1998). A study of STS themes in state science curriculum frameworks in the United States. Journal of Science Education and Technology, 7, 191–197.
Lederman, N. G., Abd-El-Khalick, F., Bell, R. L., & Schwartz, R. S. (2002). Views of nature of science questionnaire: Toward valid and meaningful assessment of learners’ conceptions of nature of science. Journal of Research in Science Teaching, 39, 497–521.
Lijnse, P. (1990). Energy between the life-world of pupils and the world of physics. Science Education, 74, 571–583.
Liu, X., & Lesniak, K. (2006). Progression in children’s understanding of the matter concept from elementary to high school. Journal of Research in Science Teaching, 43, 320–347.
Lunetta, V. N. (1998). The school science laboratory: historical perspectives and contexts for contemporary teaching. In K. Tobin & B. Fraser (Eds.), International handbook of science education (pp. 249–264). Dordrecht, The Netherlands: Kluwer.
McClelland, D. C. (1973). Testing for competence rather than for “intelligence.” American Psychologist, 28(1), 1–14.
Miller, J. D. (2004). Public understanding of, and attitudes toward, scientific research: What we know and what we need to know. Public Understanding of Science, 13, 273–294.
Minstrell, J., & van Zee, E. H. (Eds.). (2000). Inquiring into inquiry learning and teaching in science. Washington, DC: AAAS.
Mislevy, R. J., Steinberg, L. S., & Almond, R. G. (2002). On the roles of task model variables in assessment design. In S. Irvine & P. Kyllonen (Eds.), Item generation for test development (pp. 97–128). Mahwah, NJ: Lawrence Erlbaum.
Murray, T. S., Clermont, Y., & Binkley, M. (Eds.). (2005). Measuring adult literacy and life skills: New frameworks for assessment. Ottawa, Canada: Statistics Canada.
National Research Council (NRC) (1996). National science education standards. Washington, DC: National Academy Press.
Neumann, K., Kauertz, A., Lau, A., Notarp, H., & Fischer, H. E. (2007). Die Modellierung physikalischer Kompetenz und ihrer Entwicklung [Modelling physics competence and its development]. Zeitschrift für Didaktik der Naturwissenschaften, 13, 103–132.
Nicolis, G., & Prigogine, I. (1987). Die Erforschung des Komplexen. Auf dem Weg zu einem neuen Verständnis der Naturwissenschaften [Discovering the complex: On a way to a new understanding of the sciences]. München, Germany: Piper.
Novak, J. D. (1998). Learning, creating, and using knowledge: Concept maps as facilitative tools in school and corporations. Mahwah, NJ: Lawrence Erlbaum.
Nunes, T., Bryant, P., Evans, D., Bell, D., Gardner, S., Gardner, A., & Carraher, J. (2007). The contribution of logical reasoning to the learning of mathematics in primary school. British Journal of Developmental Psychology, 25, 147–166.
Organisation for Economic Cooperation and Development (OECD) (1999). Measuring student knowledge and skills: A new framework for assessment. Paris: OECD Publication Service.
Organisation for Economic Cooperation and Development (OECD) (2001). Knowledge and skills for life: First results from the OECD programme for international student assessment (PISA) 2000. Paris: OECD Publication Service.
Overton, W. F. (1985). Scientific methodologies and the competence–moderator–performance issue. In E. Neimark, R. Delisi, & J. Newman (Eds.), Moderators of competence (pp. 15–41). Hillsdale, NJ: Erlbaum.
Palmer, D. (1997). The effect of context on students’ reasoning about forces. International Journal of Science Education, 19, 681–696.
Parker, W. C., Ninomiya, A., & Cogan, J. (1999). Educating world citizens: Toward multinational curriculum development. American Educational Research Journal, 36, 117–145.
Pauen, S., & Wilkening, F. (1997). Children’s analogical reasoning about natural phenomena. Journal of Experimental Child Psychology, 67, 90–113.
Pellegrino, J. W., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academic Press.
Resnick, L. B. (1976). The nature of intelligence. Hillsdale, NJ: Lawrence Erlbaum.
Resnick, L. B., & Ford, W. W. (1981). The psychology of mathematics for instruction. Hillsdale, NJ: Erlbaum Associates.
Rost, J. (1990). Rasch models in latent classes: An integration of two approaches to item analysis. Applied Psychological Measurement, 14, 271–282.
Roth, W.-M. (1998). Situated cognition and assessment of competence in science. Evaluation and Program Planning, 21, 155–169.
Rothwell, W. J., & Lindholm, J. E. (1999). Competency identification, modeling and assessment in the USA. International Journal of Training and Development, 3(2), 90–105.
Rychen, D. S., & Salganik, L. H. (Eds.). (2003). Key competencies for a successful life and a well-functioning society. Seattle, WA: Hogrefe.
Saab, N., van Joolingen, W. R., & van Hout-Wolters, B. H. A. M. (2007). Supporting communication in a collaborative discovery learning environment: The effect of instruction. Instructional Science, 35, 73–98.
Sternberg, R. J., & Grigorenko, E. L. (Eds.). (2003). The psychology of abilities, competencies, and expertise. New York: Cambridge University Press.
Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4, 295–312.
van der Linden, W. J., & Hambleton, R. K. (1996). Handbook of modern item-response theory. Berlin, Germany: Springer.
Weinert, F. E. (2001). Concept of competence – A conceptual clarification. In D. S. Rychen & L. H. Salganik (Eds.), Defining and selecting key competencies (pp. 45–65). Göttingen, Germany: Hogrefe and Huber.
White, R. W. (1959). Motivation reconsidered: The concept of competence. Psychological Review, 66, 297–333.
Williams, G., & Clark, D. (1997). Mathematical task complexity and task selection. In D. M. Clarke, P. M. Horne, L. Lowe, M. Mackinlay, & A. McDonough (Eds.), Mathematics. Imagine the possibilities (pp. 406–415). Brunswick, Victoria: Mathematics Association of Victoria.
Wilson, M. (2005). Constructing measures: An item response modelling approach. Mahwah, NJ: Lawrence Erlbaum.
Winterton, J., Delamare-Le Deist, F., & Stringfellow, E. (2005). Typology of knowledge, skills and competences: Clarification of the concept and prototype. Thessaloniki, Greece: European Centre for the Development of Vocational Training (CEDEFOP).
Wittmann, M. C., Steinberg, R. N., & Redish, E. F. (1999). Making sense of how students make sense of mechanical waves. The Physics Teacher, 37, 15–21.
Zelazo, P. R., & Frye, D. (1998). Cognitive complexity and control: II. The development of executive function in childhood. Current Directions in Psychological Science, 7, 121–126.
Zimmermann, C. (2005). The development of scientific reasoning skills: What psychologists contribute to an understanding of elementary science learning (Report to the National Research Council Committee on science learning kindergarten through eighth grade). Retrieved August 30, 2005, from http://www7.nationalacademies.org/bose/Corinne_Zimmerman_Final_Paper.pdf
© 2012 Springer Science+Business Media B.V.
Kauertz, A., Neumann, K., Haertig, H. (2012). Competence in Science Education. In: Fraser, B., Tobin, K., McRobbie, C. (eds) Second International Handbook of Science Education. Springer International Handbooks of Education, vol 24. Springer, Dordrecht. https://doi.org/10.1007/978-1-4020-9041-7_47
Print ISBN: 978-1-4020-9040-0
Online ISBN: 978-1-4020-9041-7