This chapter was prepared for the volume that focuses on the 7 Cs of creativity. The 7 Cs are Creators, Creating, Cooperation, Context, Creations, Consumption, and Curricula. This chapter explores the various ways that creativity is expressed, and as such best fits into the Creations category. It distinguishes between creative potential and creative performance and covers the assessment of each. There is a discussion of objectivity and subjectivity in assessments, the pros and cons of the various assessments, and criticism of some of the research being reported. As is the case with the other chapters in this same volume, a brief historical overview is given first. That is followed by a review of the relevant theories and key empirical results. The final section of this chapter pinpoints debates and concerns and offers suggestions about advancing the field.

Several key terms and concepts should be defined right up front. As a matter of fact, something must be said about the word creativity itself. Elsewhere I have proposed that the noun creativity should be used only sparingly, or better yet not at all, at least in the academic literature. That is because there are so many different kinds of creativity. It can be expressed in a multitude of ways—and these are not always all that strongly related to one another. There are domain differences in creativity (Agnoli et al., 2016; An & Runco, 2016; Baer, 1998), for example, indicating that creative performance in the arts may differ from creative performance in the sciences, mathematics, drama, dance, and so on. There is no consensus about which domains are in fact distinct, nor even agreement on which criteria should be used to confirm distinctiveness among them. Gardner’s (1993) list of criteria (e.g., biological bases, experimental evidence, and developmental trajectories) remains the most tenable. New domains have been proposed, including naturalistic creativity, technological creativity, and everyday creativity (Cropley, 1990; Richards, 2007; Runco & Bahleda, 1986). Some of these are only distinct if creative potential is recognized as meaningful and extricable from actual creative performance.

This brings us to the next key definition, or definitions, namely the distinction between creative potential and actual creative performance. No measurement can be done without recognizing the separation of the two. This dichotomy also supports the idea that the noun creativity is just too vague to use. Many people have creative potential, but not everyone performs in a manifestly and unambiguously creative fashion. Further, creative potential can be assessed (details below), but that may tell you little about creative performance, just as creative performances can be measured but tell you little if anything about the full range of any individual’s creative potential. With this in mind Runco (2007) restructured the classic 4P framework (as did Lubart et al., in the Introduction to the present volume) in order to clearly distinguish creative potential from creative performance. Runco (2007) took the original 4Ps—person, press (or place), process, and product—and subsumed each under either potential or performance. The most useful part of this reorganization was that it allowed all 4Ps to be functionally tied to one another. Personality and press factors, for example, were placed under Creative Potential because a person can have critical tendencies, such as openness to experience, risk tolerance, wide interests, intrinsic motivation, or any of the other core characteristics of the creative personality, but still not actually perform in a creative fashion. The characteristics are indicative of mere potential. The same is true of “press” or place factors. (“Press” is a term from the psychology of the 1940s and 1950s and was used to label pressures on behavior. Some of these, called beta press factors, depend on the individual’s interpretation. More recent theories tend to refer to “places,” or settings, contexts, or environmental factors rather than “press.”) These too do not guarantee actual creative performance.
Creative products, also in the 4P model, are under the Creative Performance category of the new framework presented by Runco (2007). After all, if there is a product, there has been an actual performance, at least in the sense that there is a manifest result or outcome. The distinction between potential and performance is vital for understanding the various assessments and will be used throughout this chapter. That distinction is also important for Creations because sometimes there is a clear outcome of creative efforts (e.g., a product or performance) but other times the creation is a new understanding, an idea, or even an emotion. These are related to creative potential but are unlikely to be viewed as socially recognized performances.

While on the topic of terminology, something more should be said about the title of this chapter. It is a bit deceiving to use the noun, creativity, given what was just said, but then again, using the noun provided the opportunity to dive into the distinction of potential and performance! The second part of the title that needs explanation is the word, “type.” Carl Jung (1923) went into great detail about psychological types and their influences. Various measurement efforts, including the Myers-Briggs Type Indicator (MBTI; Myers et al., 1998), target types. The MBTI focuses on Thinking vs. Feeling and Intuition vs. Sensation. Isaksen et al. (2003) offered a history of Jungian types as well as empirical data relating type to cognitive styles. The title of the present chapter, “Types of Creativity,” is being used in part because Type has such broad application, having been related to attitude, perceptual tendencies, the intentional direction of one’s energies, and even ego function (see Isaksen et al., 2003). This broad definition is useful, given the common view that creativity is itself a complex or syndrome (MacKinnon, 1965; Mumford & Gustafson, 1988). In the present chapter the different kinds of creativity are designated by the accompanying nouns, including potential, performance, personality, product, and so on. Each of these is in a sense a distinct type of creativity, and reasonable expectations depend on the type.

1 A Brief History

The work of Jung (1923) provides a nice segue to the brief history promised above. Research in that same direction (e.g., personality and characteristics) dominated the earlier investigations of creativity in the 1950s. Much of it was conducted at IPAR (the Institute of Personality Assessment and Research) in Berkeley, California (Barron, 1955, 1995; Helson, 1999; MacKinnon, 1962, 1965), and most used one or another personality assessment. This early work found creative individuals to stand out in those core characteristics mentioned above (intrinsic motivation, risk tolerance, and so on). It also confirmed the existence of domain differences, though the domains studied were somewhat limited (e.g., architecture, mathematics, writing). There was research on domain differences in the 1930s (Patrick, 1935, 1937, 1938) but it was not often cited in the psychological literature of the 1940s, 1950s, and 1960s. There was in fact very little on creativity until IPAR. The IPAR researchers brought respectability to creativity research, as did J. P. Guilford (1950).

Guilford (1950) lamented the lack of attention given to creativity and reported statistics from the psychological literature showing that it was not often studied. Guilford also introduced the idea that creative potential represents a critical natural resource, an idea taken up again in the 1990s and used in economic and investment theories of creativity (Rubenson, 1991; Rubenson & Runco, 1992; Sternberg & Lubart, 1995). These theories characterize creative potential as a form of human capital and a highly valuable asset. Guilford’s major contribution was probably the distinction between divergent production (now usually called divergent thinking [DT]) and convergent production (CT), and the methodologies he developed for the measurement of each. DT and CT were parts of Guilford’s Structure of Intellect (SOI) model. It posited 180 distinct kinds of thinking! The SOI thus represents one of the extreme views of how human cognition can be delineated. At the other extreme is the theory that there is one general form of human cognition (“g”) that influences all thinking. Guilford’s theory was not without critics. Far from it. His research was hotly criticized, especially because his method for isolating factors (the “cells” in his SOI model) was fairly subjective. The SOI is no longer widely used (cf. Bachelor & Michael, 1997; Michael & Bachelor, 1990; Mumford, 2001; Runco, 2001). Guilford’s distinction between divergent thinking and convergent thinking, on the other hand, remains enormously popular (see Acar & Runco, 2019). Educators and researchers who wanted a reliable method for estimating creative potential saw divergent thinking in particular as filling a gap, and no doubt for that reason divergent thinking tests remain among the most commonly used measures in creativity research.

Divergent thinking tests are often used as estimates of creative potential because, unlike most convergent thinking tests (e.g., IQ and most academic tests), they allow original thinking. Divergent thinking tests are open-ended (e.g., “name all of the strong things you can think of”), and respondents can generate a large number of ideas (“ideational fluency”) using a variety of conceptual categories (“ideational flexibility”). Some of the ideas might be statistically rare or novel (indicative of “ideational originality”). The use of divergent thinking tests for estimates of creative potential is, then, justified in part by their allowing originality. Originality is absolutely key for all creative performances. The use of divergent thinking tests is additionally justified by their reliability (Guilford, 1968; Runco, 1991, 2013; Torrance, 1995). There is also some indication of long-term predictive validity (Runco et al., 2011; Torrance, 1995), at least with certain criteria of creative achievement.
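The three indices just named can be made concrete with a short sketch. The following Python function is only a hypothetical illustration (the function name, the category mapping, and the rarity cutoff are assumptions, not part of any published scoring manual): fluency is the count of ideas, flexibility the count of distinct conceptual categories, and originality the count of ideas that are statistically rare within the sample at hand.

```python
from collections import Counter

def score_divergent_thinking(all_responses, categories, rarity_cutoff=0.4):
    """Score divergent-thinking protocols for fluency, flexibility, originality.

    all_responses: one list of ideas per examinee.
    categories: dict mapping each idea to a conceptual category.
    Originality here means statistical rarity: an idea produced by fewer
    than `rarity_cutoff` of examinees counts as original.
    """
    n = len(all_responses)
    # Pool ideas across the whole sample to establish rarity norms.
    pooled = Counter(idea for ideas in all_responses for idea in ideas)
    scores = []
    for ideas in all_responses:
        fluency = len(ideas)                                      # total ideas
        flexibility = len({categories.get(i, i) for i in ideas})  # distinct categories
        originality = sum(1 for i in ideas if pooled[i] / n < rarity_cutoff)
        scores.append({"fluency": fluency,
                       "flexibility": flexibility,
                       "originality": originality})
    return scores
```

Note that originality defined this way is relative to the sample, which is one reason (discussed later) that scores do not generalize freely across samples.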

Tests of divergent thinking have been used in research suggesting that there is a 4th grade slump in creative potential, as well as an old-age rigidity of thought (i.e., a loss of flexibility) (Chown, 1961; Torrance, 1972; Rubenson & Runco, 1995). Divergent thinking tests have been used to test the impact of various educational and enhancement efforts (Plucker et al., 2011). They have been adapted such that they offer information about the potential for problem finding (Alabbasi et al., 2020; Hu et al., 2010) and have also been used in scores of investigations intended to identify which attitudes, values, and traits are shared by creative individuals (Albert & Runco, 1989; Basadur, 1994; Basadur & Hausdorf, 1996). Some of the best tests of divergent thinking present realistic problem situations instead of something simple and abstract. Instead of “name strong things,” examinees might be given a problem from their own lives and asked to generate a range of options. These options are then evaluated for fluency, flexibility, and originality. Divergent thinking tests are also being used in recent research on brain correlates of creative thinking (Weisberg, 2013; Yoruk & Runco, 2014).

Divergent thinking tests have been used with a very large range of samples, including preschool children (Bijvoet-van den Berg & Hoicka, 2014; Moran et al., 1983; Tegano & Moran, 1989), primary school children (Runco, 1986a; Torrance, 1995), college students (Runco et al., 2006), and older adults (Gott, 1992). Divergent thinking tests have also been used in exceptional samples, such as entrepreneurs (Ames & Runco, 2005). Such a wide range of samples will come as no surprise, at least if creative potentials are assumed to be widely distributed.

There is a question about using divergent thinking tests with productive adults. Productive adults are, by definition, involved in actual performances, or perhaps creating some sort of product or artifact. Instead of assessing the creative potential that is estimated with a divergent thinking test, or any paper-and-pencil test for that matter, it is quite possible to look to the products and actual performances of those adults.

Before elaborating on the possibility of assessing actual products instead of creative potential, something should be said about the method just mentioned and the phrase, “paper and pencil tests.” This is dated; many divergent thinking tests are now given digitally, no paper in sight (e.g., Beketayev & Runco, 2016; Cheung & Lau, 2010). Apparently there are differences between digital and paper-and-pencil administrations (Guo, 2016), which should come as no surprise given how much evidence there is that divergent thinking data must be collected under just the right conditions. If those conditions (e.g., liberal time allotment, de-emphasis on testing and consequences, explicitly directing examinees away from typical test-like expectations) are not met, people are not very original, even if the task is open-ended. This is a significant concern, given how much research is done digitally these days and given the possibility that it may not really be telling us anything about creative potential. I will return to this concern in the Conclusions section of this chapter, but for now the point is that divergent thinking tests are not always given in paper-and-pencil format.

A second point to emphasize about divergent thinking is that it often collaborates with convergent processes (Basadur, 1994; Lubart et al., 2013; Runco, 2003; Runco & Vega, 1990; Runco & Chand, 1995). In fact, actual creative performances and achievements certainly depend on various things, including knowledge, analytic thinking, judgment, and motivation. This kind of collaboration is suggested by the two-tier model of the creative process (Runco & Chand, 1995), which has problem finding, ideation (divergent thinking), and evaluation on the primary tier and motivation (both intrinsic and extrinsic) and knowledge (both conceptual and procedural) on a secondary tier. Lubart et al. (2013) were equally explicit about the role of convergent processes in their work on an instrument called the Evaluation of Potential Creativity (EPoC). This has shown great promise and is reviewed later in this chapter, under the Creative Products section. It is mentioned here just because of its recognition of both divergent (and exploratory) and convergent processes.

2 Four Creative Populations

Decisions about what type of Creations can be reasonably expected depend a great deal on the individual. There are at least four populations (i.e., groups of individuals) that should be recognized. First there are children who have creative potential but are not yet producing artifacts that are socially-recognized as creative. Their originality might be entirely personal (Runco, 1996); indeed, their behavior or ideas may be original only relative to their own previous actions. In other words, a child might show originality by doing something new and novel for him- or herself, in which case it is original even if it is not original against any social norms or standards. Keep in mind here (a) that this is just originality and not creativity—unless the new idea or behavior is also effective, in which case the standard definition (Runco & Jaeger, 2012) applies and the label “creative” is appropriate; (b) all creative achievement, even that at the highest level, starts with and therefore depends on this same kind of personal creative potential, though eminent creativity requires various other things, such as persistence, knowledge, and social recognition (Runco, 1995); and (c) the child just described is only displaying creative potential. Divergent thinking tests are fitting assessments for children’s original thinking and their creative potential. It is most accurate to say that divergent thinking tests are useful estimates of the potential for creative thinking. They are estimates because they are imperfect, which is true of all tests and measures. Hence the need to calculate reliabilities and validities. This is best done each and every time a test is given.

A second population represents adults who express their creative talents, but only in personal ways, such as self-enlightenment, self-actualization, and everyday or mundane creative thinking and behavior (Kinney et al., 2012; Runco et al., 1991; Tan, 2016). They do original and effective things, but they may not produce artifacts that are socially recognized as creative. Adults who do produce such artifacts represent the third population of creative individuals. They start new businesses, patent an original and useful device, publish, or do something that is socially-recognized as “creative.” The study of entrepreneurs, cited above, represents research with this population. They are creative in a socially-recognized and mature fashion, but they are not eminently creative. The eminently creative represent the fourth population. Not only have they produced something that is socially-recognized as creative; they have changed the way others think, and their work has stood the test of time (Albert, 1975). Simonton (1988) has written extensively about the eminently creative and described how they are persuasive—a nice label for how eminently creative persons change the way others think. Persuasion also represents a 6th P, to go along with Potential, Personality, Process, Product, and Place (Rhodes, 1961; Runco, 2007).

The four groups just identified (children with creative potential, self-actualized adults and those involved in everyday creative actions, creatively productive adults, and the eminently creative) are each well-represented in creativity research, and they can be easily distinguished from one another, as I have tried to do in the paragraph above. Everyday creativity is probably the least well represented in the research, no doubt because of the difficulties in operationalizing criteria, which is why the work of Cropley (1990) and Kinney et al. (2012) is so welcome. More on this below.

Assessments can be chosen such that they fit the particular population. Generalizations from one population to another should be avoided, though care must be taken such that the groups are not presumed to be completely and permanently distinct. Children with creative potential can, after all, develop their talents and become creatively productive adults or even eminent creators. As a matter of fact, that is probably the ideal outcome of education and the creativity research—to fulfill creative potentials, thereby improving the quality of life of the individual (through self-actualization) and benefiting society.

Care must also be taken because too often eminent creativity is completely separated from the other types of creativity. This probably applies to Ghiselin’s (1963) attempt to identify criteria for different levels of creativity, and it certainly is apparent whenever Big C creativity is distinguished from little c creativity. When the two are totally extricated from one another, the vital connection between them is forgotten. This is why Runco (2014) called the Big C/little c distinction a false dichotomy; he emphasized the connection between little c (potential) and its eventual expression in mature or even world-shaking creations. In other words, professional and eminent creativity depend on personal creativity. All manifest creative behavior, including adult, mature, eminent, and otherwise socially-recognized behavior, begins as creative potential.

3 Creative Products

As noted above, sometimes there is a possibility of collecting data about actual creative products and performances instead of estimating creative potential from divergent thinking or perhaps a personality test. Certainly there are a number of good performance measures and a handful of useful methods for assessing productivity. Inventions have been counted and examined (Huber, 1998; Simonton, 2012), as have works of art, publications, patents, scores, and so on (see Lindauer, 1990; Simonton, 1984). With certain samples, products such as collages, poems, and works of art can be elicited and assessed (Amabile, 1982; Hennessey, 1994; Lubart et al., 2013; Runco et al., 1994). Amabile’s Consensual Assessment Technique (CAT) has proven reliable with various populations, including the first two mentioned above, children and non-eminent adults. The CAT requires that judges are involved in the domain being judged (e.g., instructors of poetry judge poems), but intriguingly, creativity is not defined for the judges; they are to use their own implicit definitions. They are also asked to rate the technical skill of the product being judged.

Very often overlooked is that the CAT was not developed to assess individual differences in creative ability. Instead it was developed to determine if creative expression varied among different experimental and control conditions. Reliabilities of the CAT are quite good across a wide range of samples and media (Amabile, 1982). The CAT does not lend itself to broad comparisons because the scoring is done with reference to the sample at hand. There are also differences among the judgments obtained from judges with different levels of experience or different backgrounds. Runco (1989), for example, found that professional artists disagreed with art teachers and art students. Hence generalizations from one group of judges to another are not warranted.

Lubart et al.’s (2013) EPoC, mentioned briefly above, is also a domain-specific assessment of products. It requires that children produce something within (a) graphic or artistic, (b) verbal or literary, and (c) social problem solving domains. It allows assessment of divergent-exploratory thinking and convergent-integrative thinking. Because different domains are represented in this assessment, and there are different indices of creative potential, Lubart et al. are able to profile each student and identify strengths and weaknesses. This leads directly to recommended experiences and curriculum.

Standing back, there are two questions for all assessments of creative products. First, who is to judge the products? And, as Murray (1959) asked long ago, who is to judge the judges? It might be put this way: whenever judgment is involved in assessment, there is subjectivity. It would be quite unwise to take any assessment or research seriously if judgment is involved and no index of inter-judge reliability is given (or if it is low). Further, good inter-judge reliability is really just one check of the value and meaning of any assessment. Inter-judge reliability only provides some index of the degree to which judges agree. There are different reasons why they might agree, and not all of them confirm the meaning of the assessment (e.g., halo effects, expectancy effects, or agreement based on appeal rather than creativity).
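One simple index of inter-judge agreement is the mean pairwise Pearson correlation among judges' ratings; more formal indices (intraclass correlations, Cronbach's alpha computed across judges) are standard in the psychometric literature. The sketch below is a minimal illustration, assuming each judge has rated the same set of products; the function names are hypothetical.

```python
from itertools import combinations

def pearson(x, y):
    """Pearson correlation between two equal-length rating vectors.
    (Undefined if either judge gives constant ratings.)"""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def mean_interjudge_reliability(ratings):
    """ratings: dict mapping judge -> list of ratings of the same products.
    Returns the mean pairwise Pearson correlation among judges."""
    pairs = list(combinations(ratings.values(), 2))
    return sum(pearson(a, b) for a, b in pairs) / len(pairs)
```

As the text cautions, a high value here only shows that judges agree; it does not by itself establish that they agree about creativity rather than, say, technical appeal.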

A second question about actual products is, what are you trying to understand? Why are you doing the assessment? Often the interest is in potential, which means that the concern may be about the future and how the individual might perform at a later time. Any educational program will be interested in potential, for example, as will efforts to encourage or train creative thinking. There is always risk when assessing potential. There is some uncertainty, precisely because potential is inferred and is more ambiguous than actual performances. Recall here that potential is not all that highly correlated with performance. Obviously the person who has performed in a creative fashion has potential, and has used it, but that does not necessarily mean that he or she will continue performing regularly in the same fashion. As evidenced by one-hit wonders (Kozbelt, 2008) or Shakespeare’s last two sonnets (which tend to be excluded from collections of quality works; Simonton, 2012), the past is no guarantee of the future.

Sometimes the interest is in historically important creators, in which case it is entirely appropriate to examine products, be they inventions, publications, works of art, or the like. Sometimes the interest is in socially-recognized creativity, and here again, performances and products would be best for this kind of work. As a matter of fact, a case has been made for avoiding all tests. This point of view is based on (a) the idea that all tests are samples of behavior and artificial (i.e., not indicative of what occurs in the natural environment), and (b) the notion that, if it is possible to examine people who are unambiguously creative, there simply is no need for the estimate that is provided by a test.

The first of these ideas (a) is reasonable and is a useful reminder that good tests are representative samples. Short or over-constrained tests are not good samples and not representative of naturally-occurring creative behavior. The dismissal of tests does ignore the fact that the predictive validity of tests can be assessed. Such validation provides precise information about how indicative the tested sample is of naturally-occurring creative behavior—at least if the criterion is indicative of what occurs in the natural environment. The second idea (b) is also reasonable, but it ignores the usefulness of estimating potential. Consider the interest in the creative potential of children. They have yet to prove themselves creatively in a socially-meaningful way (e.g., publishing a novel) but it is informative to know if they have potential. Looking to unambiguously creative products is similarly irrelevant to the everyday creativity (which does not lead to a product or a socially-shared activity) mentioned earlier.

These views have been debated for many years. Shapiro (1970) and Taylor (1964; Taylor & Holland, 1962), for example, went into great detail about the criterion problem. This problem is a result of the fact that, whenever you have a predictor of creativity, you can only be sure it is a good one if you assess its predictive validity, and that requires a valid criterion. In fact, psychometric textbooks often describe predictive validity as a special kind of criterion-related validity. But how do you validate the criterion? You need another valid measure—in effect, another criterion! Sometimes it is also reasonable to ask why you need a predictor at all if you have a valid criterion, but keep in mind what was said about the value of studying creative potential.

Hocevar and Bachelor (1989) recognized the criterion problem in their review of creativity assessment and asked, “why not go directly to the criteria that have face validity? This can best be accomplished through studying eminent individuals, evaluating creative products, or using an inventory of creative activities and accomplishments” (1989, p. 63). Sometimes unambiguously creative individuals can be evaluated, but sometimes they cannot, and often, as is the case with educational efforts, there is more of an interest in potential than unambiguously creative performances and individuals.

4 Creative Achievement

Ludwig (1992) developed the Creative Achievement Scales (CAS) to assess eminent, unambiguously creative individuals. As is often the case with eminent creators, tests cannot be administered and the only way to measure creative talent is biographically. The CAS uses various biographical data and provides ratings of the individual’s personality, process of work, and lifetime productivity, each on a scale recognizing minor, intermediate, and major contributions. It has good inter-rater reliability. Ludwig (1992) used the CAS with over 1000 individuals and reported some of the clearest findings available on domain differences, psychopathology and creativity, and the correlation with background variables (e.g., family).

Kinney et al. (2012) developed the Lifetime Creativity Scales (LCS). The LCS represents a unique approach in that the intent is to measure the quantity and quality of creative accomplishments taking into account the entire adult lifetime. By looking across the individual’s lifetime Kinney et al. are able to identify peak levels of creativity, as well as the continued efforts throughout the lifetime (or what they call the “pervasiveness of creative activity”). The LCS focus on “creative outcomes (that is, on products, behaviors, or major ideas that have been communicated to other people) and take into account both vocational and avocational activities.” Examples of moderate creative activity include the following: “a person: (a) paints an original landscape; (b) improvises a beautiful new song; (c) writes an original and entertaining story enjoyed by friends; (d) helps a neighbor find new and effective solutions to personal problems; (e) makes up a series of novel games which excite and entertain children; (f) makes original modifications to recipes that greatly improve the taste and appearance of dishes; or (g) designs and builds original customized and functional furniture.” Unlike many approaches, such as the CAT, the LCS require that the examiner is extensively trained.

A related approach also asks about vocational and avocational creative activity and recognizes different domains. Unlike the CAS and the LCS, Creative Activity and Achievement Checklists (CAAC) use self-reported data. This method was developed decades ago, when there was serious concern over the discriminant validity of creativity. In other words, there was uncertainty about the separation of creative ability vs. “g” and academic skills. The seminal work of Wallach and Kogan (1965) and Wallach and Wing (1969) put this question to rest, the latter doing so in part by adapting Holland’s (1961) CAAC for students. Holland (1961, 1965) himself had used a CAAC to demonstrate that academic achievement (e.g., winning academic awards) was unrelated to extracurricular creative achievement. As the name implies, this is one objective of the CAAC—to assess creative achievements that occur in the natural environment and are not required by school. In particular, these creative achievements (and activities, such as designing one’s own scientific apparatus, or writing poetry) are not assigned by teachers nor required in any way. They reflect choices made by the child or student him- or herself. Milgram and Hong (1999) also reported convincing data about such discretionary, intrinsically-motivated creativity, with their own version of a CAAC. Milgram and Hong were interested in what individuals do during their leisure time. They reported that creative activities and achievements done outside of school, during leisure time, are highly predictive of later adult creative achievements.

The CAAC allows different domains to be assessed. Traditional domains (mathematics, writing, drama, dance, leadership, art, crafts, music) are often included, and recent efforts have also included Technology, Moral creativity, Political creativity, and Everyday creativity. One version of the CAAC was developed for college students and included architecture, engineering, and biology. The CAAC is almost always a self-report, which does imply that various measurement concerns (i.e., memory, honesty, socially desirable responding) are relevant. Runco et al. (1990) found good reliability in a sample of mothers who evaluated their children with a special version of the CAAC. Paek (in press) summarized all research done using the various CAACs.

Runco (1986a) had both Quantity of activities and Quality of achievement scores in his version of the CAAC. He reported canonical and bivariate correlational analyses that showed that certain domains (e.g., writing) were more highly associated with divergent thinking than other domains (e.g., music). He also found that the Quantity CAAC scores were more highly related to divergent thinking than the Quality CAAC scores. The Quantity and Quality CAAC scores were far from redundant and not highly correlated with one another. Carson et al.’s (2005) Creative Achievement Questionnaire also recognizes the distinction between performance quantity and quality. The CAQ is like the CAAC in that it allows domain-specific assessment. It assumes that broadly socially-recognized achievements are the most creative, and CAQ scores are weighted accordingly.

There is debate about the relationship of quality and quantity within creativity assessments. Any quality score requires a judgment, and as noted above, judgments open the door to subjectivity. This is why there is frequently such poor agreement among different groups of judges (Runco et al., 1994). There is some reason to think that quantity is strongly associated with quality (Simonton, 1984), at least on a behavioral level (which includes products), though it certainly makes little sense on the level of ideas. Divergent thinking tests use the labels fluency for quantity and originality for quality, and often the two are highly correlated, but not always. The separation of quality and quantity is evidenced by the fact that the unique variance for originality is reliable, at least in certain samples, even when the variance attributed to fluency is removed (Maio et al., 2020; Runco & Albert, 1985). At least as convincing, experimental evidence using explicit instructions has demonstrated that originality can be manipulated without changing fluency, which would be impossible if the two were interdependent. In fact, originality can increase while fluency decreases (Runco, 1986b). Then there is the theoretical separation of originality and fluency. Simply put, creativity theory gives great weight to originality. Virtually every definition of creativity includes originality. Quantity is not a part of those definitions.

5 Brain and Neuroimaging Studies

The most important questions in creativity research concern the mechanisms involved. How do creative ideas and insights come about? The answer to this question is by far the most likely to be provided by the neurosciences. Fortunately, the largest recent growth in creativity research is probably in work focused on the brain. A large number of fMRI studies have been reported, for example, with interesting results. Unfortunately, many results are questionable. Reviews of the neuroscientific research on creativity have been quite critical of the underlying theories, as well as the methods used (Dietrich, 2007; Weisberg, 2013; Yoruk & Runco, 2014). Dietrich (2007), for example, pinpointed four key problems. One has already been covered in the present chapter: divergent thinking is not synonymous with creativity. Dietrich sees this as a huge problem, as do I, but dozens of neuroscientific studies refer to divergent thinking "creativity tests" and collect only divergent thinking data. Divergent thinking tests, when administered and scored correctly (see Runco, 1991, 2013), offer useful information about creative potential, but to really understand creative potential, more than divergent thinking scores would be needed. In addition, divergent thinking tests are predictors, not criteria. As Wallach (1970) explained 50 years ago, a predictor is one thing, a criterion something else altogether. Mistaking divergent thinking tests for criteria of creativity may also explain why they are so often called tests of creativity. At the risk of being redundant, divergent thinking tests are useful estimates of the potential for creative thinking.

Dietrich's (2007) second criticism was that creative processes are too often assigned to the right hemisphere. This assignment no doubt resulted from the fascinating work on "split brains" and commissurotomies (see Hoppe & Kyle, 1990), but investigations of creativity within a single hemisphere have not been rigorous, and the theories cited to support that assignment are misguided, to say the least. Further, Dietrich's criticism is really more general and concerns localization; any attempt to use one part of the brain to explain creativity indicates a misunderstanding of creativity. Dietrich referred to research finding hints of creativity in all kinds of different locations, including the prefrontal cortex (which is probably the most popular location at present), visual cortex, hippocampus, amygdala, cerebellum, and even the basal ganglia. He tied this kind of thinking to the claims that "creative individuals use more of their brains; their brains are more efficient (whatever that means); they have more dopamine receptors, or more neurons, or those little nerve cells are more densely packed. The list of platitudes is practically endless" (p. 24). Any assignment of location assumes one location, which makes no sense, given the way the brain works (it uses systems and networks) and given what is required for creativity. As noted earlier, creativity is a syndrome, or complex (MacKinnon, 1965; Mumford & Gustafson, 1988). Creativity is not, in Dietrich's terms, "monolithic". This conclusion is entirely consistent with a theme of the present chapter, that there are different types of creativity, some indicating creative potential, others actual performance. Some can be expected of children, others only seen in adults. Everyday creative behavior is one thing, eminent creative achievement something else.

Dietrich (2007) next questioned the neuroscientific research emphasizing de-focused attention for creative thinking. Here again the problem is really just simplification. Dietrich did not entirely dismiss de-focused attention, no doubt because in some instances broad attentional horizons do facilitate original thinking. His point was that sometimes focused attention plays a role in creative thinking; it is not always de-focused attention. Along the same lines was Dietrich's fourth criticism: the creative process does not depend on an altered state of consciousness, nor on mood disorders or some other tendency towards psychopathology. As Dietrich described it, there are many more creative insights among individuals who are not in altered states nor experiencing psychopathology than among those who are.

These are worthy concerns, and I certainly agree that the neurosciences need to do a better job of looking to sound theory. Dietrich nodded to the cognitive sciences, which makes an enormous amount of sense, particularly given the need to identify underlying mechanisms. I would add that not only should the neuroscientific approach to creativity look more carefully at the research on creative cognition; it should also look much more carefully at the broader creativity literature, and in particular at the 60 years of research on the assessment of creativity. Sadly, important lessons are being ignored in fMRI studies, bringing many, perhaps even most, of the recent findings into question. Some fMRI studies are making the same mistake made in the early 1960s (e.g., Getzels & Jackson, 1962), when creativity was viewed as just another intellectual skill and creativity tests were therefore administered just like other kinds of tests.

In addition to the fact that earlier lessons are being ignored, there is a problem arising because fMRI research tends to require short testing times. That means that the sample of behavior (e.g., the test outcome) is not indicative of authentic, spontaneous creativity, like that which occurs in the natural environment. More broadly, fMRI research requires that tests of creative thinking be given under controlled conditions (inside the apparatus), and creative ideas suffer from precisely this kind of control. Since Wallach and Kogan (1965), divergent thinking testing has required that tasks be given as games and time de-emphasized (i.e., not mentioned) because it is such a distraction to examinees. Individuals taking a test of divergent thinking tend to be less original if they think the tasks are just like any other test. It is best to be quite explicit in the testing setting that the divergent thinking tasks are not tests. Otherwise respondents too easily jump to a test-taking mode of thought and think about time and spelling and points and grades and only conventionally-correct answers. That mode of thought needs to be avoided, which means that tasks should be administered only when examinee expectations are directed away from testing. Wallach and Kogan (1965) found that students who were unoriginal when they received divergent thinking tasks under test-like conditions became much more original when they received the same tasks under non-test-like conditions.

Just above I pointed out that understanding the mechanism underlying the creative process will depend on the neurosciences. It should be clear at this point that such understanding will also depend on the cognitive sciences, and, given the need for empirical work, on psychometrics as well. Inter-disciplinary collaboration is vital. If the neurosciences look more carefully both at the cognitive sciences and at the decades of research on how to assess the creative process in a meaningful manner, great strides are likely. Progress is especially likely if two of the problems with the fMRI research are avoided. These can be summarized as follows:

  • Too often creativity is assessed with a single-item measure, or a measure with very few items. Psychometric theory is quite clear that good tests are based on representative samples. If an assessment has one item, it is a pathetically small sample. Admittedly, when asking examinees to sit in an fMRI apparatus, there may be a need to collect the data in a very short period of time. Unfortunately this means that the data are not representative of authentic creative behavior as it has been described in the research for the last 60 years. It means that the sample of responses collected by the brief test is not representative of what the individual could do. The situation is too highly controlled to generalize to the spontaneous, intrinsically motivated creative behavior that is really of interest.

  • The related problem is that timed assessments of creativity interfere with authentic creative expression. All too often, in controlled research or testing, examinees are given two minutes or some similarly brief amount of time to "perform." As noted above, this is contrary to research showing that examinees are not original when timed, and in fact just the mention of time may put them in a test-taking mode or direct their thinking to extrinsic factors, both of which will inhibit creative thinking. Originality flourishes in permissive, game-like environments and it may take time to develop (Mednick, 1962; Paek et al., 2021). So again, results from research with short (e.g., 1-, 2-, or 3-item) or timed measures are not really telling us much about authentic creative talents. They do not use an adequate sample of behavior.
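The sampling concern in the first bullet can be made concrete with the Spearman-Brown prophecy formula, a standard psychometric result (not discussed in this chapter) that predicts how reliability changes as a test is lengthened. The single-item reliability used below is an assumed value, chosen purely for illustration.

```python
# Sketch: why a one-item "creativity test" is a weak behavioral sample.
# Spearman-Brown prophecy formula: predicted reliability of a test
# lengthened k-fold, given the reliability of the original test.
def spearman_brown(r_single, k):
    """Predicted reliability when a test is lengthened by factor k."""
    return (k * r_single) / (1 + (k - 1) * r_single)

r_one_item = 0.30  # assumed reliability of a single item (illustrative)
for k in (1, 5, 10, 20):
    print(f"{k:2d} items -> predicted reliability {spearman_brown(r_one_item, k):.2f}")
```

Under this assumption, a single item is far below conventional reliability standards, and the predicted reliability climbs steadily as more items are added, which is one formal expression of the "representative sample" requirement.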

These criticisms have been leveled at various neuroimaging projects, and the rebuttal has been, "the creativity tests are reliable." That may be true, but reliability is only one requirement for a meaningful assessment. Meaningful assessments are also in some way valid, and they are meaningful with respect to what is known about the creative process. Consider the theory of remote associates. It predicts that original ideas are often remote; they are far removed from the initial idea or problem. Time is needed to get to those remote ideas (Paek et al., 2021). An assessment that gives a divergent thinking test with a 2- or 3-min time limit may provide reliable scores, but who knows what the participants would have been capable of if the testing conditions were more supportive of creative thinking? What if those same participants had been given 10 min, or better yet, no time limits? They would not have been distracted by time; they would not be led to believe that they were being tested (and as such should be conventional); and the work on test-like conditions suggests that many people who are not original with a time limit can be original without one. So again, creative potentials are not well assessed with short, timed tests, and a test can give reliable scores and yet say nothing about the creative process.

Some fMRI research stands out because the creative process seems to be unconstrained enough to be authentic. This is the work of Limb (Barrett & Limb, 2020; also see National Endowment for the Arts, 2015). In this work jazz and rap musicians are positioned, one at a time, in an fMRI apparatus and asked to play something overlearned, by memory. They are then asked to improvise. The differences in the fMRI results were quite obvious. Limb does not point to any one brain location, either; neuroanatomical circuits and networks are involved in improvisation. The fact that there was a rote experimental condition against which the improvisation could be compared suggests that, even though the musicians were in the fMRI apparatus, they were able to tap authentic creative processes.

6 Concerns and Conclusions

This chapter draws from psychometric theory as well as the creativity research. It pinpoints questions that must be asked when empirical research on Creations is conducted. These include: Which population is being sampled? What is the focus, creative potential or actual creative performance? Caveats are also covered, the broadest concerning generalizations. Simply put, if creative potential is assessed, generalizations to actual creative performance are only as good as the reliability and predictive validity of the particular measure. Conversely, if performance is assessed, perhaps with one of the product methods or the CAS, LCS, or CAAC, the data are postdictive rather than predictive, and again, generalizations are often not warranted.

This is not to say that creativity assessment is impossible or worthless. Far from it. Good thing, because just about all creativity research depends on good measurement! There are caveats and precautions to be taken, but there are also quite a few good measures, and the good ones provide useful information. None is a test of creativity. There is no such thing. But there are reliable measures of creative potential, good methods for evaluating products, and sound measures of (past) creative performances. None alone tells the whole story, but each provides useful information.

I started this chapter by explaining the title. This allowed me to offer a definition of creativity and led nicely to the distinction of creative potential and creative performance. I will close now by using that same distinction, but this time I will refer to the title of the volume, Homo Creativus. That title suggests that humans are creative. It is in fact a part of our being, a part of our nature, a reflection of our genetic make-up. Although the present chapter identified important distinctions among types of creativity, there is also a creative universal. I am referring to creative potential. This potential may be expressed in different ways, which is why there are different types of creative performance, and why eras and cultures differ, but there is a universal as well. I am confident that volumes such as the present one will help to advance our understanding of these creative potentials such that they are fulfilled. Each of our lives will be richer if we do.