
The paradox of science education is that its goal is to impart new schemata to replace the student’s extant ideas, which differ from the scientific theories being taught. (Carey, 1986, p. 1123)

Researchers looking at student thinking and learning in science have used a wide range of terms to describe what they are exploring (Abimbola, 1988), but when Black and Lucas edited a 1993 volume about the topic they ‘settled on the suggestion of Children’s Informal Ideas in Science as a relatively neutral expression’ (Black & Lucas, 1993a, p. xii). Presumably, part of this neutrality comes from ‘idea’ being an everyday term that each of us understands in terms of our subjective experience.

The Idea of Ideas

The Oxford Companion to the Mind notes that ideas ‘might be called “the sentences of thought”. They are expressed by language, but underlie language – for the idea comes before its expression’ (Gregory, 1987, p. 337). Although we may all feel we are comfortable with what an idea is, another reference work warns that in philosophy ‘idea’ has been ‘a term that has had a variety of technical usages’ and notes that ‘modern philosophers prefer more specific terms like “sense datum”, “image”, and “concept”’ (Brockhampton, 1997, p. 262). However, the same source notes that ‘an innate idea [sic] is a concept not derived from experience’ (ibid).

Another term that is commonly used is ‘thoughts’ in the sense of a thought being ‘an idea or a pattern of ideas’ (Watson, 1968, p. 1150). I will therefore use the terms ‘idea’ and ‘thought’ interchangeably.

The Source of Thoughts and Ideas

Given that individuals such as Jean (our hypothetical representative science learner, from the previous chapter) have ideas, this raises the question of how thoughts arise: How do people have ideas, and what are they based upon? An obvious answer is that ideas derive from sensing the world – but clearly not all ideas can be derived directly from sensory experience. Ideas such as the notion of the electron, the second law of thermodynamics and the alternative conception that exercising gives you energy are not based upon direct observations in any straightforward and simple way.

Sensation and Perceiving the World

One key part of our subjective experience is that we are aware of our surroundings, that is, we form mental ‘images’ of the external physical world. The existence of a physical world in which we collectively exist was one of the shared commitments of researchers in science education that was assumed earlier. Indeed, this would seem to be a necessary ontological commitment of science – scientific enquiry would seem to presuppose an objective and relatively stable physical world. I can see my computer and the rain on my study window, and I can hear music playing and, now that I am paying attention, the previously unnoticed, but now irritating, sound of my fingers depressing keys on my computer keyboard.

One aspect of mental experience, then, is sensory: the body’s sensory system allows us to experience and think about our immediate environment, so the ideas a person has derive at least in part from their experiences of an external world. Figure 4.1 represents this simply: objects and events in the world can somehow be sensed and presented to consciousness. There are clearly two large areas of simplification here, relating to how our body is able to sense the environment and how those sensory inputs lead to conscious experiences.

Fig. 4.1
figure 1

Sensing the world represented as an unproblematic process

The Apparatus of Sensation

At the functional/systems level of description (see Chap. 3), we are able to experience the world because we have systems components which act as a ‘sensory interface’: that is, we are equipped with apparatus which converts stimuli in the external world into a form of information that can be processed within the cognitive system. This apparatus is our eyes, ears and other sensory organs. However, using the less familiar systems language of a ‘sensory interface’ (see Fig. 4.2) is useful because it can remind us of something that we may otherwise take for granted. Sensation involves representing – or coding – one kind of thing (a pattern of illumination of the retinal cells, say) in another form (electrical signals leading to sequential activation of neurons in the nervous system).

Fig. 4.2
figure 2

Sensing the world necessarily involves representing external stimuli by coding the signal into a form that can be processed in the nervous system

This is important, because any such translation from one form into another can only offer a limited fidelity to the original stimulus. This is familiar from technological applications. Telephone lines tend to lead to audio signals that lose much of the frequency range of the input. Pictures presented on video links may break up into an unrealistic staccato series of images. Photographs or screen images have limited resolution. Portable music players offer a range of file formats, and those which support the storage of the most tunes (such as mp3 files) do so by sampling original so-called lossless sources in ways that lose some of the original information.
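This point can be illustrated with a minimal sketch of my own (not an example from the original text): a smooth ‘signal’ is re-coded into a more coarsely sampled and quantised form, and an attempt is then made to reconstruct the original from the coded version. Some of the original information is simply no longer present after coding, so the reconstruction cannot be fully faithful.

```python
# A minimal sketch (illustrative only) of how re-coding a signal into a
# coarser form loses fidelity: the original is sampled at a lower rate and
# quantised to a few levels, and some detail cannot then be recovered.
import numpy as np

rate_in, rate_coded = 1000, 50                 # samples per second: original vs coded
t_in = np.arange(0, 1, 1 / rate_in)
stimulus = np.sin(2 * np.pi * 5 * t_in) + 0.3 * np.sin(2 * np.pi * 60 * t_in)

# 'Coding': sample more coarsely and keep only 8 discrete levels
t_coded = np.arange(0, 1, 1 / rate_coded)
sampled = np.interp(t_coded, t_in, stimulus)
levels = 8
quantised = np.round((sampled + 1.5) / 3 * (levels - 1)) / (levels - 1) * 3 - 1.5

# 'Decoding': reconstruct an estimate of the original from the coded version
reconstructed = np.interp(t_in, t_coded, quantised)
error = np.sqrt(np.mean((stimulus - reconstructed) ** 2))
print(f"RMS difference between original and reconstruction: {error:.3f}")
```

The particular numbers are arbitrary; the point is only that any translation into a lower-resolution code bounds the fidelity of whatever is later reconstructed from it.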

A story I once heard that amused me, although quite probably apocryphal, concerned a man who is reputed to have approached the artist Pablo Picasso in an art gallery and complained that his pictures of women looked nothing like women. The man is supposed to have reached for his wallet and pulled out a photograph of his own wife. Showing this to Picasso, the critic claimed that this was what a real woman looked like. Picasso is said to have then asked if the man’s wife was really like the picture, and – on having this confirmed – commented that the man’s wife was very small and rather flat.

Representations are, of course, just that. However, if we are very familiar with the representation, it may become easy to forget this – after all we commonly use representations to stand for, and to operate on and so think about, other things as if they were those things. It is often appropriate and more productive to come to see the flickering pattern on the television screen as the newsreader, to see the icon on the computer screen as the hard drive and to see the chemical equation as the reaction.

If asked to reflect we might well say these symbols only stand for other things, but we learn to react to them and act on them as if they are the things. When I manipulate my computer mouse in certain ways, I conceptualise what I am doing as physically ‘opening’ my hard drive and imagine I am doing this at the point where I see the screen cursor overlaying the desktop icon on the screen. If I stop to think, I know that the drive is not located there, and that the images on the screen merely represent operations elsewhere in the computer system, but because the screen acts as part of the interface for interacting with my computer, it is more effective to operate with the screen images as if they are drives, applications, files, etc.

It is tempting to suggest that there is a parallel here with the previous chapter: that the computer screen with its visual imagery is the analogy of consciousness, reflecting particular representations of underlying processes in the computer. However, it is probably not productive to delve too deeply into such a comparison, as the observer of the screen is outside the computer system.

Representations May Seem Realistic

Most readers will have seen that with current technology it is possible to produce images of solids at the level of individual atoms. Many magazines and books carry reproductions of such images. They look, to all intents and purposes, like photographs of atoms. However, they are not. By scanning across a surface with a suitable probe, it is possible to detect the electric field that we interpret as due to the submicroscopic entities, such as ions or molecules, conjectured to make up the structure. By adjusting the position of the probe to keep the measured field constant, it is possible to build up a contour map of the surface – at least if we assume the field represents the ‘position’ of the surface, of course. (Whether the notion of a surface is actually appropriate at the submicroscopic level is a moot point.) This information can be coded using a false-colour scale – and, voila, we have an image, which looks like a photograph of the surface at the atomic level. The process is in some ways similar to taking sonar readings of a riverbed that can be built up into a contour map of river depth.
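The step from measurements to ‘photograph-like’ image can be sketched in a few lines of code (an illustration of my own; the surface data and palette are invented). A grid of probe readings, which carries no colour information of its own, is simply mapped onto an arbitrary palette so that it can be displayed as if it were a picture of the surface.

```python
# A toy sketch (illustrative only) of a 'false-colour' image: a grid of
# probe-height readings is normalised and mapped onto an arbitrary palette.
import numpy as np

# Hypothetical 'surface': probe heights recorded while keeping the measured
# field constant, as described for scanning probe techniques
x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 200), np.linspace(0, 4 * np.pi, 200))
heights = np.sin(x) * np.sin(y)                # a regular 'atomic' corrugation

# Normalise heights to 0..1 and map onto a simple blue-to-red palette.
# Any other palette would be just as 'true' to the data.
norm = (heights - heights.min()) / (heights.max() - heights.min())
red, blue = norm, 1.0 - norm
green = np.zeros_like(norm)
false_colour_image = np.dstack([red, green, blue])   # shape (200, 200, 3), RGB values 0..1

print(false_colour_image.shape)   # ready to display, e.g. with matplotlib's imshow
```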

Similarly, most of us are familiar with seeing television images from infrared cameras – where radiation that is not energetic enough to be detected by our retinas is detected by cameras and then represented as a visible image on the display (with a scale of greys or a spectrum of colours representing the temperature of emitting objects from different parts of the scene) that can activate our retinal cells. Similar techniques are used in astronomy. For example, satellites that image the earth at different frequencies can be used to build up coloured photographs of the surface. However, the frequencies detected by the sensors do not correspond to the three primary colours of light of the human visual system. This can lead to very impressive and sometimes quite beautiful false-colour images highlighting different features to those which would be salient to the human optical system (Sheffield, 1981).

In the same way, planetary probes sent out into the solar system can use radiation in the radio region of the electromagnetic spectrum to produce ‘false-colour’ images of other planets and of their moons. This can of course be seen as a more sophisticated version of radar, where objects can be detected using reflections of radio signals that are converted to visible blips and audible bleeps for the human operator.

The term ‘false colour’ however seems to imply that, by contrast, there is a standard for translating radiation signals into ‘true’ colour. Most humans have similar visual systems. Although we cannot know if we experience colours the same subjectively (i.e. as qualia), the apparatus we each have for detecting light is generally physiologically similar, and most people are sensitive to much the same range of ‘visible’ frequencies of radiation. However, there are exceptions. Some people do not have the normal pattern of cones firing in the retina and suffer from ‘colour blindness’. This does not actually mean not seeing any colours, but missing some of the usual discriminations. Arguably, people who are colour-blind see in ‘false colour’ compared to most people, but this would be a somewhat arbitrary judgement based upon taking ‘normal’ human vision as the standard.

This would be arbitrary because there is nothing absolute about the specific nature of human vision: presumably it is an optimisation of match between the general organic properties of humans and the environment in which we live, subject to whatever constraints operated during evolution such as available genes and proteins. If, hypothetically, humans had evolved on Mars, it seems likely the visual system would have been better matched to discriminate in that environment and would not have operated so well here on earth.

Some insects have eyes with quite different frequency responses to human eyes and in particular can detect radiation (i.e. radiation that for them is visible) in what for humans is the ultraviolet region of the electromagnetic spectrum (Peitsch et al., 1992). Although there are variations from species to species, it has been found that Lepidoptera species (e.g. butterflies) may have receptors with peak sensitivity in the UV as well as other receptors with peak sensitivity in the blue, green and red regions of the spectrum. Hymenoptera species such as bees may lack ‘red’ receptors and so have three main types of colour receptors, like humans, but detecting primarily green, blue and ultraviolet frequencies. By human standards it would seem that bees have ‘false-colour’ vision, but if eschewing an anthropocentric perspective, it would seem just as fair to suggest that, by bee standards, humans have false-colour vision.

The point here is that as vision involves the conversion of one kind of entity (a pattern of radiation) into a completely different material form (electrical activity in nerve cells), there is no ‘correct’ standard for the conversion process, so bee vision is no more or less intrinsically faithful to the world sensed than human vision. Similar arguments apply to the other senses. From a constructivist perspective, it is not meaningful to ask which system of representation offers the best fidelity to reality: rather the focus should be on the extent to which systems for representing information visually in cognitive systems facilitate the construction of models of the external world which support intelligent action in/on that external world. Having a visual system that supports the development of effective mental models (see Chap. 11), models that allow us to act in and upon the world in desired ways (i.e. models that show good ‘fit’ to the external world in terms of our experiences, Glasersfeld, 1989), is as near as we get to ‘seeing the world as it is’ – because seeing an object is not something that can be inherent in the object itself but is always the outcome of an interaction between the world and a particular sensory system.

Sensory Representation Involves a Coding System

Our senses are then based on transducers, units that convert energy from one form to another, so sensory information is necessarily represented in the nervous system using a coding process that has evolved to support us in effectively modelling the external world. This coding process allows different patterns of stimulation to lead to different patterns of neural activity. Presumably the coding process has evolved over a very long period of time and has been subject to natural selection. Therefore, the way our sensory organs code stimuli is not only non-arbitrary but has been proved in the field to be a very effective system for surviving in our environment. That is reassuring, but the bee has evolved a somewhat different sensory interface which is coding data somewhat differently when in the ‘same’ environment and which has also proved fit for survival. Bees and humans are rather different – scale, form of locomotion, reproductive genetics, etc. – and perhaps have different priorities in their environments. For example, the vision of bees reveals patterns on flowers that are not visible to humans, patterns that have been construed as guiding the bees to the nectar and so pollen. So it should not be surprising if effective solutions for sensing the environment are different in the two cases. However, this does underline that sensing is inherently a process of representing – of modelling stimuli in electrical activity.
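What is meant by ‘coding’ here can be made concrete with a schematic sketch of my own (a toy illustration, not a physiological model): a stimulus intensity is converted into a quite different kind of thing – a train of discrete ‘spikes’ – such that different stimuli lead to different patterns of activity.

```python
# A schematic sketch (illustrative, not a physiological model) of sensation
# as a coding process: a stimulus intensity is represented as spike times.
import numpy as np

rng = np.random.default_rng(0)

def encode(intensity, duration_ms=1000, max_rate_hz=100):
    """Convert a stimulus intensity (0..1) into a train of spike times."""
    expected_spikes = intensity * max_rate_hz * duration_ms / 1000
    n_spikes = rng.poisson(expected_spikes)           # spike count varies with intensity
    return np.sort(rng.uniform(0, duration_ms, n_spikes))

dim_light = encode(0.1)
bright_light = encode(0.9)
print(f"dim stimulus coded as {len(dim_light)} spikes; bright stimulus as {len(bright_light)} spikes")
```

The stimulus and the code are different kinds of thing entirely: nothing about the spike train is ‘light-like’, it merely represents the light in a form the rest of the system can process.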

Sensory Information Is Selectively Filtered Before Reaching Awareness

However, this is only the first of a number of complications in building a mental model of our surroundings.

… most sensory signals probably do not reach conscious awareness but many of them lead to modified responses and behaviours, as in the tactile and proprioceptive signals that influence simple everyday postural and walking activities, which have therefore clearly been detected and utilized in complex brain functions. (Libet, 1996, p. 97)

Sense organs do not send nerve fibres directly to the cortex but rather to subcortical areas such as the thalamus, which then have connections into the cortex (Changeux, 1983/1997). Neural ‘nuclei’ in the brainstem are considered to control pathways into the cortical areas that undertake the ‘higher-level’ analysis of information.

When concentrating, we may become oblivious to what is around us (Csikszentmihalyi, 1988), and even when we are aware of our surroundings, we are usually only aware of a fraction of the available sensory information. I had not noticed the sound of my typing, until I reached the point above where I wished to give an example of what I was sensing. We might notice a change in engine tone where we had not been consciously aware of an engine, or spot movement in a hedge that we had not previously noticed was in our field of vision. So there is a (considerable) degree of filtering of sensory input, and what we perceive is only a part of what we sense.

This relates to the body’s internal sensory monitoring as well. Propriosensory nerves provide information to the brain on the position of the parts of the body, but we are usually only aware of this input when involved in coordinating movement requiring particular attention. In terms of the earlier discussion of the relationship between consciousness and brain function, an interpretation is that most sensory information can be effectively processed within the brain without reaching the ‘higher levels’ of processing that are associated with consciousness. Indeed, Changeux (1983/1997) reports that a baby born without a cortex, and so presumably with no conscious awareness, will show many expected behaviours, such as sleeping and waking, sucking and crying.

This leads to a slightly more sophisticated model (see Fig. 4.3) of what occurs as a result of sensory input than that presented earlier in the chapter. Sensory input is filtered within a part of the system, which acts as a kind of ‘post-in’ department. Some sensory information is directed to the (executive) processing module that leads to conscious awareness, and some will be directed to other parts of the system where the information will be processed and possibly acted upon through motor responses but without involvement of conscious thought.

Fig. 4.3
figure 3

Sensory information is filtered and is channelled selectively to different parts of the system

Executive and Non-executive Processing Modules

In Fig. 4.3, the part of the cognitive system where processing is associated with consciousness has been labelled as the ‘executive’ module (Parkin, 1993). The term ‘executive’ is used in a number of contexts to refer to certain key cognitive functions and/or to the systems that undertake those functions in the system. The term will be met later in the book in the context of discussing thinking and memory.

To indicate that sensory signals as representations of sensory information may be processed and acted upon within the system without the involvement of this executive module, Fig. 4.3 includes a different processing unit that operates at a subconscious, automatic level. This has been labelled simply as a ‘process operator’ to indicate that it processes information without having executive function. The importance of preconscious processing will be discussed further below.

Sensory Information Is Patched-Up Before Being Presented to Consciousness: ‘Filling-In’

The human sensory apparatus has various limitations due to its physical form. Some of these are obvious – we do not have eyes in the back of our heads, so our visual fields are limited but we are not really aware of the edges of the visual field. Questions about what we experience just outside the visual field (nothing, blackness, grey) do not usually occur to us and are akin, given the currently widely accepted model of a finite but unbounded universe that originated in a singularity, to questions of what there was before the formation of time or what lies outside our universe.

More interesting are features like the ‘blind spot’. The mammalian eye has evolved with the retinal cell axons, which send the signals from the cell to the brain, on the ‘outside’ of the retinal layer – that is, in the vitreous humour – so that they have to pass through the retina. This happens for all the retinal receptor cells at the same area of the retina. So there is an area of each retina where the fibres from the receptors pass through the retina to form the optic nerve. Although not physically a large area, this is still a significant patch within the retina. Moreover, it is not near the periphery of the light sensitive area, but relatively central. So, on closing one eye, our visual field should have a ‘hole’ where nothing can be seen, although we do not notice any such gap. However, the blind spot can readily be demonstrated by positioning a dot so that the light reflected from it falls on the blind spot rather than on the surrounding receptors: the dot then simply becomes invisible.

Our subjective experience (i.e. at the mental level) is of a continuous view of the scene, where actually there is a part of the scene that we are not seeing – where there is a dot that is now invisible. This is due to a feature of the cognitive system known as filling-in. The term can be explained by a simple analogy. Consider a science educator who was part of a team researching learners’ understanding of the phases of the moon and who was given by another member of the team a set of scanned images of drawings collected from students participating as informants in the research. Consider that one of the images looked like the left-hand part of Fig. 4.4.

Fig. 4.4
figure 4

Filling-in a copy of a student’s diagram with an apparent flaw

The researcher might well interpret the large shape at the top of the image, labelled ‘s’, as representing the sun (with ‘m’ as the moon and ‘e’ as the earth). The researcher might also interpret the shapes as meant to be circles that act as two-dimensional representations of the approximately spherical bodies being represented. However, the representation of the sun is slightly odd – it shows a round body but with a big dip, as if a part has been removed. Perhaps the student thinks that the sun is almost round, but with a chunk missing – but that seems unlikely.

The reader will probably agree that it is not likely that the student thinks the sun has this deformed shape – but it is worth noting that this is a judgement made totally independently of the data, or any other information about the individual student, and so is based on background knowledge deriving from previous sources that we judge should apply to this student’s ideas. Clearly if we are interested in understanding the ideas of individuals, there will always be a danger of interpreting them in terms of what we have learnt from other individuals. However, having noted the potential for misinterpretation, and as this is a hypothetical case, we will allow the assumption.

The researcher might surmise that this is a flaw in the representation – perhaps during the scanning process something on the scanner glass, or stuck to the original drawing, obscured part of the image. The researcher may therefore decide that the image needs to be amended so it is a better representation of the original and uses a graphics programme to modify the image to give the right-hand version. If the amended figure is included in a research report, the authors might omit to mention that the image had been ‘touched up’ to correct an apparent flaw. If you were to read the report, then as far as you are aware, the published figure (the right-hand version) is a faithful representation of the student’s original drawing.

Now this is simply meant as an analogy, because something similar happens when the cognitive system fills in the blind spot. The sensory interface (the retina) sends signals relating to the stimulus (light falling on the retina) with no information coming from the blind spot where there are no receptor cells. A separate part of the cognitive system processes (‘analyses’) the signals and in effect decides there will not be a hole in the external world being represented, so it fills it in by extrapolating from the available information. In the part of the cortex which processes visual signals, there is ‘a patch of cortex corresponding to each eye’s blind spot that receives input from the other eye as well as from the region surrounding the blind spot in the same eye’ (Ramachandran, 2003, p. xv).

That processed signal is then sent to other parts of the cognitive system, so that the component of the system that gives rise to conscious experience, where vision is experienced, is not aware that the information received has been amended to correct an apparent flaw in the original data from the retina – just as the reader of the hypothetical research report was not told that the representation of the student’s drawing has been altered.

Most of the time this works well, as we have two eyes, and even when we keep one eye closed, the part of our surroundings that is not registered because of the blind spot would usually appear as a continuation of adjacent areas – so filling-in can produce an accurate rendition. However, the demonstration with the disappearing spot shows that when the part of the scene missed by the blind spot is not continuous with its surroundings, the subconscious processing of the sensory signal interpolates inappropriately. An analogy here is with the student taking readings of water being heated in a beaker and extrapolating the temperature series 14, 34, 56, 75 and 94 °C by assuming that the next reading will be approximately 115 °C – a reading that will never be obtained, as the water will boil at around 100 °C.

Under these conditions, the processed signal gives false information about part of our surroundings, reporting what is not there and missing what is:

What I mean by filling-in is simply this: that one quite literally sees visual stimuli (e.g., patterns or colours) as arising from a region of the visual field where there is actually no visual input. … In neural terms, this means that a set of neurons is being activated in such a way that a visual stimulus is perceived as arising from a location in the visual field where there is, in fact, no visual stimulus. (Ramachandran, 2003, p. xv, italics in original)

Filling-in has presumably evolved, and been retained, as part of the way our cognitive systems function because it generally works – most of the time it corrects a defect in sensory input and supports accurate perception. However, clearly there are conditions under which it can mislead. In my analogy of the student drawing, the patching-up of the apparently faulty image would seem an appropriate thing for the researcher to do – but it was based on assumptions. Perhaps had the researcher asked the student about this before amending the diagram, it might have transpired that the unusual representation of the sun was not an artefact of copying but part of the deliberate design. Perhaps the student was meaning to show how a cloud can obscure part of the face of the sun: in which case the filling-in process could have denied the reader some interesting information about the student’s ideas.
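The logic of filling-in can be sketched in a few lines of code (an illustration of my own, not a model of the visual system): a gap in the ‘sensed’ signal is patched by interpolating from its surroundings. This works well when the missing region simply continues its surroundings, but it silently erases a small feature lying wholly within the gap – just as the dot disappears in the classic demonstration.

```python
# A toy sketch (illustrative only) of 'filling-in': a gap where no receptors
# report is patched by interpolating from the surrounding signal, so a small
# dot lying entirely inside the gap simply vanishes from the 'percept'.
import numpy as np

positions = np.arange(100)
scene = np.full(100, 0.8)          # a uniformly bright background
scene[48:52] = 0.0                 # a small dark dot in the scene

blind_spot = (positions >= 45) & (positions < 55)   # region with no receptors
sensed = scene.copy()
sensed[blind_spot] = np.nan        # no signal arrives from the blind spot

# 'Filling-in': interpolate across the gap from the available signal
known = ~np.isnan(sensed)
percept = sensed.copy()
percept[~known] = np.interp(positions[~known], positions[known], sensed[known])

print("actual scene in the gap:   ", scene[45:55])
print("'perceived' scene in gap:  ", np.round(percept[45:55], 2))   # the dot has gone
```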

Perception Is the Outcome of Active Processing of Sensory Information

Furthermore, we are usually aware of objects and events, when what we sense are patches of colour, tones and so forth. This is what is meant by ‘perception’: ‘the process of recognizing or identifying something; Usually employed of sense perception, when the thing which we recognize or identify is the object affecting a sense organ’ (Drever & Wallerstein, 1964, p. 206). Roth suggests that ‘the term perception refers to the means by which information acquired from the environment via the sense organs is transformed into experiences of objects, events, sounds, tastes etc’ (Roth, 1986, p. 81). A key term here is ‘transformed’. This again is something we tend to take for granted. The percept, the ‘mental product of the act of perceiving’ (Drever & Wallerstein, p. 206), is a conscious image of a bus or building or tree, etc., but of course our perception is not of an internal representation but rather that such objects are present in our surroundings.

By definition what is ‘in’ the nervous system when we perceive objects in our surroundings cannot be those objects – we do not have buses and buildings and trees in our central nervous systems, only representations of them, coded into electrical patterns of activation. This is obvious but reminds us that we have learnt to ‘see’ buses and buildings and trees and so forth without any independent standard of what they are ‘meant’ to look like. In terms of the system level of description (see Chap. 3, Table 3.4), stimuli (such as light reflecting from an object) are detected at the sensory interface (the retina in the case of light) and coded into electrical activity. The electrical representations of the stimuli are subject to processing (i.e. in circuits in the brain) in components that interpret the signals as indicating the presence of certain familiar features of our surroundings, and then the outputs of that level of processing are further signals that are sent to the ‘higher’ processing centre that is accessible to consciousness. Deese (1963, p. 400) has commented that ‘the world presented through our senses is a vast jumbled confusion of different sensations. We are able to deal with it only by cutting it down to the size of our mental processes’.

There are then at least three system components involved in perception: the sensory interface, one or more subconscious processing components which analyse sensory signals and interpret them, and the higher-level ‘executive’ component which receives the interpretations (see Fig. 4.5).

Fig. 4.5
figure 5

Perception involves interpretation of sensory signals
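The three-component account summarised in Fig. 4.5 can be made explicit as a simple processing pipeline (the labels and toy logic below are my own, offered only to clarify the structure): the ‘executive’ never receives the stimulus itself, only an interpretation constructed by earlier components.

```python
# A schematic sketch of the three-component account of perception: the
# function names are my own labels for the components, not the author's.
def sensory_interface(stimulus: str) -> list:
    """Code an external stimulus into a (toy) pattern of activation."""
    return [ord(ch) / 255 for ch in stimulus]

def subconscious_interpreter(signal: list) -> str:
    """Analyse the coded signal and offer an interpretation in terms of familiar objects."""
    return "large red moving object, probably a bus" if sum(signal) > 2 else "nothing salient"

def executive(interpretation: str) -> str:
    """The conscious level receives only the interpretation, never the raw stimulus."""
    return f"I see: {interpretation}"

print(executive(subconscious_interpreter(sensory_interface("red double-decker approaching"))))
```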

Perception May Involve Over-Interpretation

There is then a considerable level of interpretation of sensory information prior to that information being accessible to consciousness, or, put another way, ‘it is well known that making sense of our perceptual inputs is an ‘ill-posed’ problem and much ‘computation’ must be done to produce veridical solutions’ (Crick & Koch, 1990, p. 272). We are often most aware of this when that interpretation proves to be wrong: when on investigation something we see or hear or feel turns out not to be present. The whisper transpires to just be the wind; the person moving in the shadows is just a ‘trick’ of the light; the man in the moon is just a random pattern of craters illuminated by the sun. We recognise a friend, only to then realise we are waving to a stranger. So percepts can be misinterpretations of the external world.

Innate Bias in Perception

The human cognitive systems also seem to have innate, that is, built-in, biases. That is, we seem to be born with a tendency to interpret sensory information in particular ways. One example of this is the ease with which we interpret patterns as faces. Recognising and identifying faces is clearly a very important part of operating as a social being, and when this aspect of cognition goes wrong, the deficit can make life difficult (Sacks, 1986). Babies seem to be able to recognise faces very early in life, and it is possible to produce quite simple iconic presentations of the face that are readily recognised as such: the emoticon, for example, ☺, originally typed simply as three punctuation marks:

:-)

This seems to be achieved due to a bias in the cognitive system to interpret quite minimal cues as faces, and the by-product of this is that people have seen faces in the moon or in images of the surface of Mars, in butter melting on toast, in geological formations and so forth. Figure 4.6 is a reproduction of a photograph I took at the English coast. In the image, I can see a man’s face looking down out of the clouds. I am perfectly aware that when I took the photograph, there was no giant man in the clouds and that such a thing would not be feasible. However, even though I know that the impression is produced by the over-interpretation of arbitrary features of the cloud formation, I still see the face.

Fig. 4.6
figure 6

Sometimes subconscious levels of processing present us with interpretations we know cannot be correct (Image first published in Taber & García Franco, 2009)

Perhaps when you first looked at the image printed here, you could not make out any face? If so, and you persevered, I suspect you too (re)cognised the face, even though you also know there is no face there.

Learnt Bias in Perception

Moreover, once you have recognised the face, once it has become a percept, it is difficult to avoid ‘seeing’ it, no matter how convinced you are that it is merely an artefact of the imaging process. Even when our cognitive systems are not innately biased, our experiences certainly bias their future operation. It has been suggested that ‘there is no such thing as an entirely new experience or memory. All that is apparently new to us happens in the context of old and well-established memory’ (Fuster, 1995, p. 4).

The part of the cognitive system that consciously processes the perception does not seem to have a means of providing feedback to the subconscious component that interprets the sensory information to modify its analysis. The face will continue to be perceived regardless of our executive judgement that it is not present in the environment.

Key points to note from this discussion so far are that:

  1. Attention to sensory information is selective.

  2. Our senses only provide a representation of what is in the external world.

  3. Perception involves subconscious analysis and interpretation.

Perceiving Communication from Others

The discussion above considers an object or event in the environment that is being sensed and perceived and provides a source for the conscious ideas of the learner. Human beings have developed communication systems based on signals and signs. These signals and signs are an important part of the individual’s environment and are an important source of their sensory input. In the context of research in science education, the importance of thinking about student learning in terms of human social interaction has been increasingly recognised in recent years (Roth & Tobin, 2006; Scott, 1998; Solomon, 1993). This would include listening to the teacher, reading textbooks and discussing schoolwork with peers.

In an important sense, perceiving communication from others can be considered to be subsumed under the discussion above – Fig. 4.5 can still offer a general scheme to cover these cases. However, there are features of these situations that mark them out as particular. For one thing, language appears to be supported by areas of the brain that have evolved to specialise in language processing.

Moreover, communication is of special interest because it involves leaving traces in the environment that are intended to represent particular meanings. Perception of communication therefore raises the question of to what extent the communicatee is able to reconstruct the intended meanings. This issue will be explored later in the book, when considering the nature of understanding (Chap. 6).

Paying Attention: Distinguishing Subliminal and Preconscious Processing

It has been proposed that there are two distinct criteria for whether sensory signals, the electrical impulses from the ‘sensory interface’, result in conscious awareness (Kouider & Dehaene, 2007). One concerns the strength of the signal – below a certain threshold a signal will not be processed at high enough levels to result in conscious awareness. It has been suggested that this type of subconscious processing be referred to as subliminal. Subliminal processes may be attended to by the system, but not consciously (see Fig. 4.7).

Fig. 4.7
figure 7

Different levels of processing in the cognitive system

However, signals that meet the threshold do not necessarily lead to conscious awareness. Rather, Kouider and Dehaene suggest such signals are stored in buffers available to the executive processing module, where they may be accessed if attended to, but that they may not be accessed if attention is elsewhere, in which case they will be lost when the buffer is effectively refreshed with a new signal. Kouider and Dehaene suggest this type of processing should be referred to as preconscious:

Preconscious processing occurs when processing is limited by top-down access rather than bottom-up strength. According to the theory, preconscious processes potentially carry enough activation for conscious access, but are temporarily buffered in a non-conscious store owing to a lack of top-down attentional amplification (for instance owing to transient occupancy of the central workspace system). (Kouider & Dehaene, 2007, p. 176)

So a signal that would be strong enough to get our conscious attention in some situations may not be noticed consciously at other times. We may not notice the pain from an accident in the immediate aftermath as our focus is on responding to the crisis. We may not notice the smell of food burning in the kitchen, if we are absorbed in reading an engaging book.

This is a potentially important distinction as it shows that there are both (1) cognitive processes that are not accessible to consciousness and (2) others that are potentially accessible but require the executive to in effect ‘call’ for the data (Fig. 4.8). Part of the role of the ‘executive’ module is to direct attention and thus moderate awareness of sensory information:

Fig. 4.8
figure 8

The executive can select preconscious signals for conscious attention

Thus neural activity throughout the brain that is generated by input from the outside world may be differentially enhanced or suppressed, presumably from top-down signals emanating from integrative brain regions …. (D’Esposito, 2007, p. 17)
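The distinction described above can be caricatured in a short sketch (my own toy formalisation, not Kouider and Dehaene’s model, with an arbitrary threshold and a single-item buffer assumed for simplicity): signals below threshold never become available to awareness, while stronger signals sit in a constantly refreshed buffer and only reach consciousness if attention ‘calls’ for them in time.

```python
# A minimal sketch (illustrative only) of subliminal vs preconscious signals:
# weak signals never reach awareness; strong signals are buffered and reach
# awareness only if attention accesses the buffer before it is overwritten.
THRESHOLD = 0.5

class CognitiveSystem:
    def __init__(self):
        self.buffer = None            # a single-item, constantly refreshed buffer
        self.conscious_contents = []

    def receive(self, label, strength):
        if strength < THRESHOLD:
            return "subliminal: processed without ever being available to awareness"
        self.buffer = label           # overwrites whatever was buffered before
        return "preconscious: buffered, awaiting top-down attention"

    def attend(self):
        if self.buffer is not None:
            self.conscious_contents.append(self.buffer)   # conscious access
            self.buffer = None

system = CognitiveSystem()
print(system.receive("faint hum of the fridge", 0.2))     # subliminal
print(system.receive("smell of burning toast", 0.8))      # preconscious
print(system.receive("phone vibrating", 0.9))             # overwrites the toast signal
system.attend()
print(system.conscious_contents)                           # only the attended signal
```

Run as written, only ‘phone vibrating’ ever becomes conscious: the burning-toast signal was strong enough for awareness but was lost because attention arrived too late, paralleling the examples above.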

Recalling Experiences

As sensory information is filtered, interpreted and tidied up before reaching the executive levels of processing and so consciousness, there is a sense in which ideas based on current perceptions not only derive from current information from the external world but also draw upon features of the processing systems operating subliminally. To the extent that some of these features have themselves been shaped by previous experience, they can be considered to in a sense be a form of memory, albeit an automatic type of memory that operates without our awareness.

However, it is also the case that we have access to more explicit forms of memory. We are able to think about previous experiences and so have current mental experiences that do not directly relate to current perceptions of the external world. Such recollections can be triggered by something we currently perceive. We can also access memory without reference to current perceptions. So just as perception can act as a source of ideas, so can memory. The concept of memory is highly significant for a number of the key ideas in this book such as personal knowledge and learning and so is considered in more detail in the next chapter (Chap. 5).

Imagining Possibilities

It is also our experience that our thoughts are more than just perceptions and memories: as well as what seems to be now, and what we recollect was before, we can also think about the impossible, or the feared or wished-for, ‘as of one who in a vision sees what is to be, but is not’ (as Longfellow expresses it in his poem ‘The Song of Hiawatha’). So there can be a creative aspect to thinking, and ideas need not be direct reflections of our experiences.

Imagination is a human faculty and (just as perception and remembering) represents a processing facility of the human brain. Imagination works with/on existing resources: our perceptions of the world and our memories of previous experiences.

Philosophers have traditionally distinguished between ‘simple’ ideas and ‘complex’ ideas. Simple ideas were supposed to be directly derived from sensation. When combined, they can produce complex and abstract ideas far removed from sensory experience and expressed in shared language. Simple ideas are the ‘atoms’ of associationist accounts of the mind. (Gregory, 1987, p. 337)

Our perceptions are representations – of how the world seems to be – in the mind. Our memories are recollections of how aspects of the world previously seemed to be – so also representations in the mind. Although experience may seem holistic, it is based on a combination of both analytical and synthetic processes: analytical, as the sensory information detected by the eyes, ears, etc. is analysed and presented to the mind as the sound of an oboe, a moving car, the taste of spring onion, etc., not the patches of colour, coincidence of tones, etc. actually triggering the sensory cells; and synthetic, because we have a ‘multimedia’ experience of the world: visual and auditory information, for example, enters the brain by different ‘channels’ but leads to a unitary experience of a world where the sights, sounds, smells, etc. are associated.

Indeed, for rare individuals, synaesthetes, there are strong associations between sensory impressions that go far beyond what most people experience (e.g. particular numbers being associated with certain colours, or sounds that evoke tastes) and which would seem to be suboptimal in forming mental models of the world, because the associations can seem arbitrary to non-synaesthetes in relation to the objects stimulating the perceptions.

This blending of different kinds of input into a coherent perception of our surroundings clearly involves a lot of processing and suggests that the human nervous system has quite sophisticated apparatus for processing sensory information and constructing a multimedia impression of our immediate environment. Our ability to imagine things we have not experienced and even could not experience (because they do not reflect actual states of the external world and may even be physically impossible, i.e. they may not even reflect possible states of the external world) suggests that either the same or at least comparable processing apparatus is able to use the resources from which we construct our experience of the here and now and build alternative possibilities. The current view is that it seems likely that apparatus that evolved primarily to support perception has been recruited to allow imaginative thinking as well. This of course is highly relevant to science and science education, when considering the nature of thought experiments, for example (Brown, 1991). The processes by which imagination can produce novel ideas will be considered in more depth later in the book in the chapter on the learner’s thinking (Chap. 7).

Expressing Ideas

As the Oxford Companion to Philosophy points out, ideas ‘are entities that only exist as contents of some mind’ (Brown, 1995). That in no way suggests they are not important: in a very real sense, humans are just ‘such stuff as dreams are made on’ (as Shakespeare’s Prospero suggests in The Tempest). However, it is highly significant for research exploring student thinking that the ideas learners have are only directly available to them and have to be inferred indirectly by researchers.

Representing Ideas

If our ideas are purely personal, subjective experiences, then we cannot show them to anyone else. Rather we have to represent them in a public space if we wish to ‘share’ them with others. This representation can take a number of forms: we can express our ideas in prose or poems; we can draw out our ideas, act our ideas out or make films, compose music, compile photoessays or dance them out. The key point is that, whatever form representation takes, the representation is different from the original idea – the thought – and so the representation can never be a replica of the idea.

There is symmetry here with the situation described in terms of perception (above). The representations of the external world in the nervous system are necessarily different in form to the information reaching the senses: equally, the representations we make in the external world of our ideas are necessarily different in form to those ideas.

I have referred elsewhere to Popper’s use of the notion of three worlds: the physical world, the world of subjective experience and the world of ideas in the abstract, or Worlds 1, 2 and 3 (Popper, 1979; Taber, 2009b). These ‘Worlds’ contain fundamentally different types of things, and we can never move something from one of these worlds to another. So the abstract notion of a Platonic Form such as an ideal sphere only exists in World 3. A physical (World 1) object such as the sun or a ball bearing may be considered to approximate it, but even if it is considered ‘spherical’, it cannot be the same as the abstract notion of a sphere.

The human mind is capable of thinking about both actual objects that may be designated as spherical and about the geometrical form of the sphere in abstract, but in both cases these are mental representations (World 2 objects), so ‘the image of a Platonic Form that occurs in a person’s mind would be an idea in our sense’ (Brown, 1995, p. 389). This might be seen as a parallel to the distinction made in discussing language with metalanguage (e.g. between the word ‘banana’ and a banana): a word is not what it represents, and a sentence is not its meaning.

In this book I have not emphasised the notion of the different worlds but rather have stressed the assumption that conscious experience such as ‘having ideas’ is associated at the cognitive systems level with processing of information, which in turn is based on the properties of the physical (electrochemical) structure of the human brain. In the terms set out in the previous chapter, an account of the conscious experience such as ‘having ideas’ is a description at the mental level; an account of the processing of information is a description at the computational or system level; and an account in terms of the electrochemical structure of the human brain would be a description at the physical level. However, there is a close parallel here with the three Worlds formalism, with the mental description concerning World 2, the systems description representing an idealised account in World 3 and the physical description being about World 1.

The mental experiences we have are clearly different in form to the electrical patterns of activation in the brain that are associated with them. Although those patterns ‘translate’ (give rise) to ideas in terms of conscious experience, they are actually in the form of a ‘code’. That code also has to be translated into behaviour to represent what is coded in the external world, in the public space available to others. The representation formed in the external world is thus at least two stages removed from the idea experienced.

This is represented in Fig. 4.9, where the left-hand image offers a dualist interpretation (see Chap. 3), where ideas are translated into brain processing which leads to actions that can be examined by others in the external world (e.g. writing, speaking). In this volume it is considered more appropriate to consider mental notions such as ideas as being an emergent property of the brain. In this conceptualisation (see the right-hand image in Fig. 4.9) electrical activity in the brain is considered as the basis for mental experience: nonetheless, ideas are still two representational stages away from behaviour that reports those ideas.

Fig. 4.9
figure 9

Representations available to public scrutiny are indirect indicators of ideas

I stress this point as it is a fundamental assumption underpinning this book that because ideas can only be expressed by being represented in some other form than ‘thought’, we cannot assume that representations of ideas have true fidelity to the ideas being represented. Indeed, in principle, the representations cannot be ‘the same as’ what is being represented. This is indicated in Fig. 4.10, where the representation is in the external world and takes a physical form such as sound, inscriptions, bodily movement, etc., quite different from the nature of thought itself.

Fig. 4.10
figure 10

Representing ideas in the external world

Figure 4.10 shows that the processing of information in the ‘executive’ processing module that correlates with consciousness (having an idea) can lead to action in the external world to represent our thinking. Signals from the executive processor can activate areas of the brain concerned with the voluntary control of movement (the motor cortex), which then sends signals to the muscles to bring about speech, writing, gesturing, etc. There is in effect a ‘motor interface’ to represent electrical activity into action in the external world, just as there is a sensory interface to represent stimuli into electrical activity. As with the sensory interface (which includes the retinas, the mechanism of the ear, the olfactory membranes, etc.), the motor interface is not physically a single entity, but rather consists of various spatially distributed components. There are many places in the body where motor nerves stimulate different muscles.

So in representing an idea, the motor system is used to produce talk or to write, etc., and so the brain needs to bring about coordinated physical activity. This is something we generally take for granted, but clearly for some individuals, especially the young or those with particular disorders, producing coherent speech or readable handwriting is non-trivial. For many of us, expressing our ideas in some forms may be challenging. If I was asked to dance or paint a representation of electromagnetic induction or photosynthesis, then even assuming I was able to conceptualise how I wished to express my idea of the concept in that form, I would not be confident that I could execute the representation as I would intend. I might feel more confident in using speech or writing, but that may not apply to all young people we interrogate in our research. Clearly this is important in research if we assume that the representations that people make of their ideas are a good guide to their ideas. This is often recognised when the literacy skills of young children may limit their ability to read research instructions/questions or produce written answers, but in principle this is always an issue that limits what we can achieve in research into students’ ideas and so their understanding and their learning.

To some extent individuals can check upon the representations they make in the world, at least to the extent that they can monitor them through their own senses, that is, perception produced through processing in the cognitive system as outlined earlier. For example, we can monitor our own speech productions and, if we detect mistakes, make a corrected utterance. However, ‘slips of the tongue’ may go unnoticed, and of course the mechanism by which we obtain sensory information about our own actions in the world is subject to the same processes of representation, filtering, interpreting and tidying that accompany any other perceptions, with the added complication that we have strong expectations of how our representations will be enacted in the world. To represent this, in Fig. 4.11, the percept is a limited representation of the enacted representation of the original idea, in part influenced by knowledge of what was being represented: this is the very knowledge which is absent when interpreting the public representations made by others. More permanent representations – writing, drawings, etc. – may be easier to check in this way than more ephemeral actions such as gestures and speech that do not leave a permanent trace in the environment. But even here, the issues of interpretation are relevant. A person who produces an inscription that can readily be understood in several ways is likely, on reading back their own production (i.e. public representation), to interpret the writing according to the intended meaning and may well not detect any ambiguity.

Fig. 4.11
figure 11

Feedback to check on representation made in external world: does the percept of our representation match the idea we were trying to represent?

Accessing Another’s Ideas

The stubborn fact is that conscious experience or awareness is directly accessible only to the subject having that experience, and not to an external observer. (Libet, 1996, p. 97)

Ideas, subjective experiences of products of our own thinking, cannot then be made directly available to anyone else, but rather can only be represented (e.g. described) in the public space of the external world. Reporting an idea is necessarily to produce something of a different form than an idea. The key implication here for our present purposes is that researchers only have direct knowledge of the ideas produced by their own thinking. They can only infer the ideas of others indirectly – an indirect process that is informed by the researcher’s own implicit or explicit model(s) of what is involved.

This is shown in Fig. 4.12, where one individual, Jean, has an idea which he/she wishes to ‘share’ with another person, Lev. Jean can only do this by creating a representation of the idea in the public space they share. Lev can then form a percept of this representation, which can be used as the basis of attempting to reconstruct the idea. As a result of this, Lev produces an idea of what Jean is trying to communicate. Lev’s idea is not Jean’s idea, but an idea of what Jean’s idea might be.

Fig. 4.12
figure 12

‘Sharing’ ideas is an indirect process

Whether Lev’s idea of Jean’s idea could be considered similar enough to Jean’s idea to be considered in effect, the ‘same’ idea, a shared idea, is a challenging question – how could we ever be sure? However, there is clearly an important sense in which they cannot be the same idea.

In terms of the notion of there being three worlds (Popper, 1979), then Jean’s idea and Lev’s idea are ‘World 2’ objects. However, in a sense it is misleading to talk this way, as they are not part of the same World 2. The public space in Fig. 4.12 represents the physical world, World 1, in which Jean and Lev exist as corporeal beings. The idea that Jean had, shown to the left of the public space, occurs in Jean’s subjective world (World 2J), and the idea that Lev forms, shown here to the right of the public space, occurs in his own subjective world (World 2L). There are, in effect, as many World 2s as there are creatures capable of subjective experience sharing the same physical World 1.
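The indirectness of the whole chain in Fig. 4.12 can be sketched as follows (a toy illustration of my own; the ‘idea’ and the particular ways information is lost are invented for the purpose): Jean’s idea is represented in the public space, the representation is imperfectly perceived, and Lev reconstructs an idea of his own from the percept.

```python
# A toy sketch (illustrative only) of 'sharing' an idea: the idea is never
# transferred, only represented, perceived and reconstructed, and each step
# can discard or distort information.
def represent(idea: dict) -> str:
    """Jean expresses the idea in the public space (here, as a sentence)."""
    return f"The {idea['object']} {idea['claim']}"

def perceive(utterance: str) -> str:
    """Lev's percept of the representation (perhaps imperfect: a word is missed)."""
    words = utterance.split()
    return " ".join(words[:-1])       # suppose the final word is not caught

def reconstruct(percept: str) -> dict:
    """Lev builds his own idea from the percept, filling gaps from his own knowledge."""
    return {"heard": percept, "assumed ending": "...somehow"}

jeans_idea = {"object": "current", "claim": "is used up as it goes around the circuit"}
levs_idea = reconstruct(perceive(represent(jeans_idea)))
print(jeans_idea)
print(levs_idea)    # related to, but not the same as, Jean's idea
```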

Research to Investigate Learners’ Ideas in Science

The processes represented in Fig. 4.12 apply whenever one person interprets another’s behaviour in order to understand their ideas, whether the person expressing the idea and the person attempting to understand it are students, teachers, researchers or simply two people having an everyday conversation in the workplace, while shopping or at some leisure event.

However, in the case of researchers, there is likely to be at least one further step, as the researcher’s purpose is not just to develop a personal understanding of the informant’s ideas but also to develop public knowledge. (This in itself is a problematic notion, as will be explored in Chap. 10.) The researcher will therefore seek to report findings, potentially including accounts of informants’ ideas. At a minimum, then, the research process will include:

  • The informant expressing an idea by representing it in some form in the public space where the researcher can perceive the representation;

  • The researcher attempting to develop a personal understanding of what the idea being expressed was;

  • The researcher attempting to represent their own idea (of the informant’s idea) – most commonly in the form of inscriptions in a research report submitted to a journal (Fig. 4.13).

    Fig. 4.13
    figure 13

    Research reporting a learner’s ideas offers accounts of second-hand interpretations of those ideas

If published, the ‘public knowledge’ is then represented in a paper, and of course readers then interpret, and develop their own ideas of, what is represented. We might say that readers form their own subjective understandings of the representation of the researcher’s understanding of the original informant’s idea, as mediated by the informant’s public representation of that idea!

Sometimes research reports may include ‘data’ which more closely relates to the informant’s original representation, for example, transcription of speech or handwriting, or a scanned copy of a diagram. Such data can sometimes offer close facsimiles of the original representation although not of the informant’s original idea, of course, allowing the reader to form their own alternative interpretation of the representation. However, conventions of, and constraints on, journals, and the practicalities of reading other people’s research, mean that most published research reports at best present a few selected samples of the raw data upon which the researcher based their interpretations.

This process is represented in Fig. 4.14, and of course the chain becomes further extended if the original reader of the research is a teacher educator who reports the research to teachers in training, or a researcher who reports the research as part of a literature review for another study, or an author who discusses the research in a research review or a textbook.

Fig. 4.14
figure 14

A reader of a research report forms an idea of the learner’s original idea based on a series of re-representations

Modelling Student Ideas

In this chapter it has been argued that although we can reasonably assume that other people have ideas, much like us, we can never have direct access to anyone’s ideas but our own. It has been argued that although we can express our ideas by representing them in a public space in the external world in various forms, the representation is intrinsically very different from the idea itself and so cannot be assumed to be a perfect reflection of it. The ideas are correlates of electrical activity in the brain that can be reflected in various actions in the external world due to the body’s ‘motor interface’. There will be some skill involved in representing the idea through motor activity, including in speech production or writing.

Furthermore, the representation itself needs to be perceived by the person trying to understand the idea, and this involves the representation acting as a stimulus which is converted to electrical signals by the observer’s ‘sensory interface’, then processed in the brain to be presented to consciousness as perceptions. Those perceptions may not directly lead the observer to a candidate hypothesis for what the original idea might have been, and so further conscious analysis, reflection and interpretation may be required. The process of communicating ideas is always an indirect one. This has been stressed in such detail because in our everyday lives we take such communication for granted much of the time. Arguably, human beings have evolved to communicate well enough for most everyday purposes, and in normal interactions it makes sense to largely take the processes of communication for granted rather than to analyse them in the manner undertaken in this chapter. Misunderstandings occur, but generally we feel we are able to understand what others try to tell us or at least are usually aware when we do not understand.

An argument made in this book however is that whilst research may draw upon our everyday communicative skills, it should be a technical activity where we do not take everyday processes for granted. Everyday transactions would likely be unreasonably slowed down by analysing them in technical terms, as has been attempted in this chapter. In effect, in everyday life, we commonly admit the occasional flawed interpretation as the cost of quick and easy communication ‘of ideas’.

Research, however, seeks to make knowledge claims that are robust and well evidenced, and a more deliberate consideration of the nature of the data and its interpretation is called for. The analysis offered here then provides an important background for thinking about the research process, as discussed in Parts III and IV of the book.