This paper consists of two parts. In the first part, I argue that the use of the concept of randomness in psychology and cognitive science has led to a number of problems that cannot be resolved within the current conceptual framework, which treats randomness as an objective property of physical systems. Rather, randomness represents a psychological response to a state of high complexity. This is demonstrated through a number of examples showing that both scientists and observers conceptualize randomness in terms of complexity. In the second part, I propose that randomness be replaced by complexity in psychological discourse and briefly discuss the advantages of such a conceptual shift.

1 The trouble with randomness

1.1 Randomness: an elusive ideal

Randomness plays an important role in a number of theoretical and applied contexts, from experimental design and quantum mechanics to games of chance (Noether 1987). A positive conceptualization of randomness allows scientists to treat noise as a distinct quantity and to remove bias. Yet, the debate on the use of randomness in science and psychology has been characterized by a lack of understanding of its meaning and its manifestations. This could be because randomness represents a negative abstract idealization that is completely inaccessible to the human mind. By its very definition, it lies outside the domain of the knowable. It is the assumption of infinity that is responsible for much of the confusion. Randomness, like infinity, can only be defined in terms of its opposites (Falk 1991). Positing an infinite set or sequence involves the permanent expansion of a finite concept, which results in paradoxes (Falk 2010). Infinity cannot be grasped in its simultaneity because it has no boundaries. As noted by Péter (1957/1976), “the infinite in mathematics is conceivable by finite tools” (p. 51). At the same time, abstract definitions of infinity (which are easy to manipulate mathematically) are taken as the benchmark against which subjective randomness behavior is judged. Recent psychological evidence suggests that the mismatch between subjective and objective definitions of randomness tends to disappear once the unrealistic premise of infinity is removed (Hahn and Warren 2009).

According to a strict definition, it is not possible to know whether a source is random because such a source can produce any output. We cannot know anything about the properties of a truly random source—its distribution parameters, for instance. A truly random process can change randomly and is completely unknowable (Chaitin 2001). The existence of randomness cannot be proved (Ayton et al. 1989; Bar-Hillel and Wagenaar 1991; Chaitin 1975). Randomness, in its strict meaning, can never be achieved (Calude 2000). Randomness refers to an ideal state which is described in different ways—a state in which all symbols and combinations of symbols are mutually independent and/or represented equally, a state that is perfectly unpredictable. To illustrate, Falk (1991) defines a random process as “…a mechanism emitting symbols… with equal probabilities, independently of each other and of previous history of any length” (p. 215). Straight away, we encounter problems. If symbols are generated by a single agent, they can never be completely independent. The defining property of randomness, namely equiprobability, implies that all possibilities (combinations of elements) of any length are equally represented. Equiprobability represents an attempt to reconcile the strict frequentist definition of randomness, which requires infinity, with the limited informational reach of the human observer. It cannot be achieved in practice because it relies on infinity. Specifically, for finite sequences of reasonable complexity it is not possible for all elements (bigrams, trigrams, n-grams) to be equally represented (Hahn and Warren 2009).
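
The arithmetic behind the Hahn and Warren point can be made concrete with a short sketch (illustrative only; the example sequence and helper name are mine, not drawn from the cited work). A 20-symbol binary sequence contains only 18 overlapping trigrams, which cannot be divided evenly among the 8 possible trigram types, so exact equiprobability is ruled out before any statistical test is run.

    from collections import Counter
    from itertools import product

    def ngram_counts(seq, n):
        """Count overlapping n-grams in a binary string, listing absent ones as zero."""
        counts = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
        for gram in map("".join, product("01", repeat=n)):
            counts.setdefault(gram, 0)
        return counts

    # 20 symbols yield 18 overlapping trigrams; 18 cannot be split evenly
    # among the 8 possible trigram types, so some must be over-represented.
    sequence = "01101001100101101001"   # arbitrary illustrative sequence
    for gram, count in sorted(ngram_counts(sequence, 3).items()):
        print(gram, count)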

A strict definition of randomness involving infinity cannot be used to create probabilistic models, which must be sufficiently complex to simulate the richness of real-life processes while remaining simple enough to be amenable to mathematical treatment (Volchan 2002). Shannon’s information theory (Shannon 1948) posits a known random source. Yet, if the source is known, it cannot be random. If a particular probability distribution is adopted, the source is constrained in a way that negates its randomness. In order to make any sense of presumably random data, one has to assume that the source does not change in any appreciable way during production. This was clearly illustrated by Attneave (1959, p. 13), who admitted that it would be impossible to say anything about a message if the generating probability distribution changed over time (i.e. if it was not ergodic).

The infinity requirement informs the distinction between random processes and random products (e.g. Brown 1957; Nickerson 2002). Some authors define randomness as a property of the generating process (Wagenaar 1991). As shown above, defining a random process is fraught with unknowns. What about the output? According to Falk (1991), it is not possible to give a positive definition of the output of a random process. Lopes (1982) pointed out the problem facing anyone who has ever had to sample a random sequence: should one accept the output of a presumably random process even if that output is itself structured? The paradox of engineering randomness has been highlighted by Gardner (1989): “The closer we get to an absolutely patternless series, the closer we get to a type of pattern so rare that if we came on such a series we would suspect it had been carefully constructed by a mathematician rather than produced by a random procedure” (p. 165).

The process/output distinction is ultimately unhelpful because of the absence of a tractable relationship between the two domains. Seemingly random outputs can be produced by deterministic processes. Here, a well-known example is the decimal expansion of pi, which successfully passes most tests of randomness (Jaditz 2000). Conversely, random (or highly complex) processes can and do produce structured outputs. A similar problem exists in the domain of algorithmic complexity (Li and Vitanyi 1997). In a recent review of complexity research (Aksentijevic and Gibson 2012b), it has been suggested that algorithms do not represent a useful starting point in complexity research because apparently simple algorithms can produce complex outputs and vice versa.
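
The point about pi can be illustrated with a single crude test (a frequency comparison, not the full battery referred to by Jaditz 2000). The sketch below is my own illustration and assumes the mpmath and scipy libraries are available; it compares the first 1000 decimal digits of pi against a uniform expectation.

    from mpmath import mp, nstr
    from scipy.stats import chisquare

    mp.dps = 1010                                   # working precision in decimal places
    digits = [int(c) for c in nstr(mp.pi, 1005)[2:1002]]   # first 1000 digits after "3."

    counts = [digits.count(d) for d in range(10)]   # observed frequency of each digit
    print(counts)
    print(chisquare(counts))                        # non-significant: consistent with uniform digit frequencies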

If mathematical randomness represents a negative idealization that is completely inaccessible to the observer, what about the output of random number generators? It is clear that such programs increase the complexity of the generating process to the point at which no pattern, order or regularity can be detected within a time window of reasonable size. Eagle (2005) described this more precisely: “This is the idea that in constructing statistical tests and random number generators, the first thing to be considered is the kinds of patterns that one wants the random object to avoid instantiating” (p. 10). As long as there is any connection between the human agent and the process, the process cannot be called random, because its output will always contain faint echoes of structure (e.g. Lopes 1982, p. 629).

To recapitulate, randomness in its strict meaning can never be related to the human observer because it represents a transcendental abstraction involving infinity. The difficulty in reconciling the mathematical concept of randomness with subjective complexity has led to the gradual weakening of the former. For the concept to be usable at all, the processes under investigation have to be simple and ergodic. Departures from these simplifying constraints complicate mathematical treatment to the extent that the models become unusable. It is easy to see that the formulation of stochastic models is governed by the limited ability of the human observer (statistician) to assimilate and describe long and/or complex patterns. Theoretical distributions represent simplified idealizations of imperfect and noisy empirical information. As the sample size increases, individual cases lose importance and the data are eventually fitted to one of a number of smooth, mathematically and cognitively tractable statistical models (Hammond and Householder 1962). Somewhat paradoxically, such human creations are taken as the objective reference in randomness research.

1.2 Why humans can’t randomize

Randomness is used in science to denote disorder and the absence of pattern or meaning. As argued in this section, numerous investigations of subjective randomness behavior have come to the same conclusion. Specifically, human observers associate randomness with high complexity. Two well-documented phenomena reflecting the mismatch between subjective predictive behavior and objective definitions of randomness are the “hot hand” (Bar-Eli et al. 2006; Gilovich et al. 1985) and the “gambler’s fallacy” (e.g. Ayton and Fischer 2004; Tune 1964). These two biases reflect the tendency to overestimate and underestimate, respectively, the length of a run of identical outcomes relative to that predicted by some probabilistic model, and are often used to illustrate the lack of statistical sophistication of observers faced with highly complex situations (Alter and Oppenheimer 2006). They reflect two complementary facets of the human response to complexity and are both underpinned by the limited analytical ability of the observer.
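
The run lengths “predicted by some probabilistic model” can be made concrete with a short simulation (an illustrative sketch; the sequence length of 20 flips and the threshold of five are my own arbitrary choices). Even a fair coin routinely produces streaks that look meaningful to an observer expecting frequent alternation.

    import random

    def longest_run(flips):
        """Length of the longest run of identical outcomes."""
        best = current = 1
        for previous, flip in zip(flips, flips[1:]):
            current = current + 1 if flip == previous else 1
            best = max(best, current)
        return best

    random.seed(0)
    runs = [longest_run([random.randint(0, 1) for _ in range(20)]) for _ in range(10_000)]
    print(sum(runs) / len(runs))                  # mean longest streak in 20 fair flips
    print(sum(r >= 5 for r in runs) / len(runs))  # how often a streak of 5 or more occurs by chance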

“Hot hand” describes the belief that a player’s chances of scoring in a basketball game are greater following a hit than following a miss. In other words, runs of hits are assumed to reflect improvements in performance and not chance fluctuations. This belief is prevalent (91 % of the people questioned by Gilovich et al. subscribed to it) and persistent, despite overwhelming evidence that no such pattern can be discerned in statistical analyses (e.g. Dawes 1988). Yet, a game of basketball is not a random process governed by chance. A large number of highly interrelated factors conspire to generate outcomes that might or might not produce small-scale patterning accessible to human observers. Players do experience surges of concentration which can improve performance (Burns 2004; Gilden and Wilson 1995). Observers are aware of some of these factors and base their predictions on their limited ability to cope with complexity. More precisely, they apply small-scale patterning to a large-scale context. They cannot comprehend the totality of the situation and naturally seek meaning in partial causal snapshots (Keren and Lewis 1994). The gambler’s fallacy describes the opposite tendency of gamblers in a game of roulette to bet on red after a run of blacks and vice versa, and is also called negative recency (Bar-Hillel and Wagenaar 1991; Rapoport and Budescu 1997).

These behaviors are not caused by some peculiarity of the human cognitive system but reflect a misunderstanding of the relationship between the observer and the abstract concept of randomness. There is nothing unsophisticated or pathological about favoring patterns (hot hand) or associating randomness with change (gambler’s fallacy). The fact that the processes under consideration are too complex to be described in terms of simple patterns should not be viewed as a shortcoming because the urge to find order in complexity is the prime motivation for science and other socially desirable forms of abstraction. The two biases can be explained in terms of a (sensible) strategy of meaning seeking. When presented with complex information, observers search for clues that could help them assimilate it. One important clue is the source of information. Patterned information is correctly associated with human generators and disordered information, again correctly, with non-human sources (coins, roulette; Ayton and Fischer 2004; Boynton 2003; Burns and Corpus 2004; Matthews 2013).

Human observers fare equally poorly in the numerous studies of subjective randomness. Since the 1950s, psychologists have devoted a great deal of effort to investigating the subjective response to randomness (see e.g. Oskarsson et al. 2009 for a review). The results of this research can be summarized thus: humans are poor at perceiving, understanding and producing randomness. In the words of Gilovich and colleagues: “People’s intuitive conceptions of randomness depart systematically from the laws of chance” (Gilovich et al. 1985, p. 296). According to Hahn and Warren (2009), observers associate randomness with irregularity (disorder; see also Kahneman and Tversky 1972), equiprobability and alternation. These three properties are indicative of a decreased tolerance for long runs (gambler’s fallacy) in response to a non-human agent. If we remove the ideal of perfect unpredictability, they also provide a good description of a random pattern. Disorder is an important factor in objective measures of entropy and randomness (e.g. Feynman et al. 1963). The human predilection for pattern, which is at the core of the biases described above, underpins perception and cognition. It also explains the striving for equiprobability (which, incidentally, is demanded by mathematical definitions of randomness). Humans conceive of randomness in terms of change, which involves a balanced yet irregular representation of different symbols within a short time window. This is the cause of negative recency (gambler’s fallacy) effects (e.g. Tyzska et al. 2008). Unequal representation of different outcomes implies imbalance and patterning—after all, this is how informational redundancy is objectively defined and computed (see Attneave 1959). Finally, the subjective concept of complexity as change explains observers’ predilection for alternation. With the exception of strictly periodic outliers (e.g. 01010101…), simple patterns tend to contain few runs or alternations (e.g. 0000001 as opposed to 01100101). To summarize, observers associate randomness with complexity, that is, with irregular change (Aksentijevic and Gibson 2012a). As shown later in the text, the notion of change underpins complexity and ultimately randomness.
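
The contrast between the example strings above can be quantified directly. The sketch below (the function names are mine, introduced only for illustration) counts runs and the proportion of adjacent symbol changes, the quantity that observers appear to track when judging randomness.

    def run_count(seq):
        """Number of maximal runs of identical symbols."""
        return 1 + sum(a != b for a, b in zip(seq, seq[1:]))

    def alternation_rate(seq):
        """Proportion of adjacent positions at which the symbol changes
        (0 for a constant string, 1 for strict alternation)."""
        return sum(a != b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)

    for s in ["0000001", "01010101", "01100101"]:
        print(s, run_count(s), round(alternation_rate(s), 2))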

2 Randomness and complexity

2.1 Randomness and structure

Subjective randomness/complexity perception depends on many factors (e.g. Lordahl 1970), which can be roughly divided into structural and semantic (e.g. Garner 1962, p. 141). While the two domains cannot be completely disentangled, they can be viewed as distinct. The former include different forms of regularity—symmetry, palindromicity and periodicity. Semantic enrichment consists in linking structure to additional extraneous information. Although not explicitly present in a stimulus, this information is associated with it through implicit rules. A simple example is the binary representation of the numbers from zero to nine: 0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000 and 1001. Although individual representations differ structurally, these differences do not fully reflect the implicitly encoded information, namely, that individual symbols have different values depending on their place in the sequence. Once this hidden semantic aspect (positional weighting) is revealed, the meaning of the symbols becomes clear. In the rest of this paper, I focus on structure, which, in the absence of other clues, signals the presence of meaningful information.
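
The “hidden semantic aspect” mentioned above is simply the positional weighting rule of binary notation; a minimal sketch (the function name is mine, for illustration only):

    def binary_value(bits):
        """Decode a bit string using positional weights (8, 4, 2, 1 for four bits)."""
        return sum(int(bit) * 2 ** power for power, bit in enumerate(reversed(bits)))

    for bits in ["0000", "0001", "0010", "0011", "0100",
                 "0101", "0110", "0111", "1000", "1001"]:
        print(bits, "->", binary_value(bits))   # structurally different strings, consecutive values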

The universally observed departures from probabilistic randomness have to do with structure or the absence thereof. Although the importance of the notions of structure, symmetry, order and pattern for all aspects of human life has been considered self-evident throughout history, the first serious attempt to frame these concepts within psychology was offered by the Gestalt psychologists (Wertheimer, Koehler and Koffka) in the 1920s. They proposed that observers organize perceptual and cognitive information according to a number of simple rules, the most important of which are the principles of proximity and similarity. Perceptual scenes and cognitive schemas are organized in a way that minimizes the expenditure (more precisely, the conversion) of energy. This is what the Gestaltists termed the “Minimum principle” (Koffka 1935). Patterns are considered good if they are symmetrical, repetitive or predictable. By contrast, poor patterns are complex, asymmetrical and defy easy description.

Structure is intimately related to subjective complexity/randomness. As confirmed by experimental research (Roney and Trick 2003), human observers interpret random sequences using Gestalt principles. The more structurally complex a pattern, the more difficult it is to compress or describe in simple terms. It contains more information than a simple pattern and is less predictable. It is less symmetrical (but note the counterargument below) and will change more if subjected to various operations such as reflection. The observer searches for informational shortcuts provided by the notions of symmetry, similarity and regularity in order to minimize the expenditure of energy or the cost associated with understanding.

Patterns contain more information than is conveyed by a list of frequencies of occurrence of all combinations of symbols (Attneave 1959). Symmetries, periodicities and clusters signal the presence of potentially meaningful information, and these are not captured by the quantitative description of the pattern as given by its probability profile. One of the most robust findings in subjective randomness research is a consistent shift in the subjective randomness function relative to the probabilistic model. Specifically, subjects tend to assign high randomness scores to sequences possessing a large number of (irregular) alternations between symbols (Falk and Konold 1997). This reflects the fact that they base their judgments on the structural complexity of the pattern, which in turn reflects the cost of information processing (Aksentijevic and Gibson 2012b).

The consideration of structure allows us to answer the question of why human observers fail to behave like randomness generators. When faced with a complex sequence, observers seek out structure in order to understand. Once they realize that no structure can be discerned, they treat the sequence as random (Alberoni 1962). Contexts involving human agents such as basketball games reinforce the belief that the sequence under observation is structured and thus meaningful—human agency is appropriately associated with regularity and simplicity. This involves looking for unbroken runs of identical outcomes which are seen as evidence of intentionality and determinism. By contrast, if a process is believed to be random and intractable (as in the case of roulette), long runs are considered atypical and the focus shifts to alternation and irregularity (Alter and Oppenheimer 2006).

2.2 Randomness as complexity

One question that exposes the circularity at the heart of randomness research is “If people were good at producing random patterns, why would they need to rely on coin tosses, roulette wheels and algorithms?” The answer is that the awareness of our own dependence on pattern and structure (e.g. Coward 1990) compels us to seek ways of pushing the complexity and unpredictability of a process beyond the grasp of unaided perception and cognition. This is especially important when we need to eliminate biases and ensure fairness.

It is argued here that randomness represents an abstract formalization of the intuitively tractable concept of complexity. Complexity can take different phenomenological forms. A common definition refers to its original Latin meaning of “intertwined”. A multitude of interrelated and interdependent elements suggests a difficulty in comprehending the whole. The difficulty in understanding a phenomenon (i.e. in arriving at a theory or model compatible with human pattern-based perception and cognition) is proportional to its scale and complexity. When complexity increases to the point at which no pattern can be discerned, the perception is of a homogeneous, incomprehensible jumble. In visual perception, individual objects become blurred and take on the appearance of irregular textures. In music, stable harmonies are replaced by cacophony and, ultimately, white noise. Both of these outcomes (visual and auditory) are phenomenologically random. Due to our transience, scale constraints and limited analytical ability, we are compelled to simplify, compress and categorize, and our view of the universe is ultimately limited by the boundaries of magnitude and structure (MacKay 1950).

The explanation for the apparently faulty subjective randomization performance lies in the scale limitation of the human cognitive system, which operates on short patterns owing to well-documented cognitive constraints (e.g. Cowan 2001; Miller 1956) as well as logistical limitations. Randomness generators produce sequences that contain no discernible short-scale patterning. For instance, a slightly biased coin might require tens of thousands of tosses before a significant departure from randomness is detected. From the perspective of the observer, the patterning within such complex processes is confined to high levels of the structural hierarchy. Consequently, it is not surprising that the observer would find it impossible to detect a pattern that might take over a week to unfold fully. Even if they waited that long, they would have to apply various forms of computer-based statistical analysis in order to detect the departure from equiprobability. This is why scientists rely on large samples and sophisticated apparatus to detect weak patterns in noisy data. The inability to access higher levels of structure has been observed even with short patterns. When asked to memorize or recall briefly presented stimuli, subjects tend to rely on low-level information (number of runs; Glanzer and Clark 1962; Ichikawa 1985). Faced with long sequences whose origin they know nothing about, observers sensibly draw on different forms of structure (periodicity, symmetry etc.) in order to assimilate them.
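
The claim that a slightly biased coin might require tens of thousands of tosses can be checked with a standard power calculation for a binomial test. The sketch below is illustrative; the 51/49 bias, the alpha of 0.05 and the power of 0.80 are my own assumed values, and scipy is assumed to be available.

    from math import sqrt
    from scipy.stats import norm

    def tosses_needed(p, alpha=0.05, power=0.80):
        """Approximate number of tosses needed to detect a coin with bias p
        against the fair-coin null, using the normal approximation."""
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        effect = abs(p - 0.5)
        n = ((z_alpha * sqrt(0.25) + z_beta * sqrt(p * (1 - p))) / effect) ** 2
        return int(round(n))

    print(tosses_needed(0.51))   # roughly 20,000 tosses for a 51/49 coin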

Observers will treat as random those patterns that are too complex to be comprehended. Forgetting that objective definitions and procedures ultimately originate in subjective perception can lead to circularities and conceptual dead ends. In order to address this, I propose that both subjective and objective definitions of randomness are derived from subjective complexity—from the effort required by the observer to process a pattern. The ultimate arbiter of randomness must be the human observer, whose predilection for order defines randomness as its opposite. It is the human observer who manipulates complexity algorithmically in order to create a random process. People use a fair coin to arrive at a fair decision because they know from experience that the outcome of a toss cannot be predicted.

Here, a useful distinction can be made between available and accessible information (Aksentijevic and Gibson 2012b). The former refers to any objective definition of information. In information-theoretical terms, the available information contained within a pattern corresponds to the totality of frequency distributions of all combinations of symbols. In Algorithmic Information Theory (Chaitin 1969; Kolmogorov 1965; Solomonoff 1964), the available information is given by the uncompressed (fully unfolded) form of the string. Theoretically, available information can grow to infinity (as a function of string length and structural complexity). This is not the case with accessible information, which depends on the cognitive capacity of the observer. As the complexity of a pattern increases, the growth of accessible information gradually slows down and eventually asymptotes relative to the available information. This is the point at which the observer gives up and declares the pattern to be random or noise. I propose that randomness reflects the complexity boundary beyond which the observer can no longer relate to the observed phenomenon.
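
The available/accessible contrast can be illustrated with compressed description length, a crude but computable stand-in for the algorithmic-information quantities cited above (Kolmogorov complexity itself is uncomputable). The sketch below is my own illustration and assumes only the Python standard library.

    import random
    import zlib

    def compressed_length(s):
        """Bytes needed to store the zlib-compressed string: a rough proxy
        for how much description a pattern requires."""
        return len(zlib.compress(s.encode("ascii"), 9))

    random.seed(1)
    periodic = "01" * 500                                          # highly regular pattern
    irregular = "".join(random.choice("01") for _ in range(1000))  # irregular pattern

    print(len(periodic), compressed_length(periodic))      # long string, short description
    print(len(irregular), compressed_length(irregular))    # same length, much longer description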

2.3 Randomness as the limiting value of complexity

One critical point that is often missed in various accounts of complexity is that information processing incurs costs (Aksentijevic and Gibson 2012b). Complex information is more difficult to process, assimilate and understand. Simple patterns are readily processed, interpreted, memorized and assimilated into extant cognitive schemas. As patterns become more complex, more effort is needed to analyze and understand them. Once the amount of change exceeds a certain threshold, the pattern is perceived as incomprehensible and intractable, in other words, random. The tractability of a process is inversely proportional to its complexity, and even highly complex processes can be understood and assimilated given sufficient time and effort. One illustrative example of this is the successful breaking of the German Enigma cipher by Polish and British mathematicians, notably Alan Turing (e.g. Copeland 2004), who relied on a combination of semantic and structural analysis to decipher the ostensibly unbreakable code.

Perhaps the most primitive notion linking cognitive processing and cost is change. Change can be said to precede notions such as symmetry, regularity, periodicity and simplicity, which are defined in terms of change (Cutting 1998). In addition, change reflects the dynamic and relational nature of perception and cognition. Change is well suited to indexing the cost of information processing in the sense that any action by an agent, human or non-human, results in a change. More generally, any conceivable action is accompanied by an irreversible conversion of energy or increase in entropy. I propose that any action, however trivial, by an agent of limited life span must incur a cost and that this cost equals complexity. Processing change always costs more than processing no change. Ultimately, all instances of increasing entropy relate to the human observer. Recently, we reported a measure of pattern complexity based on a simple definition—the amount of change at different levels of a structural hierarchy (Aksentijevic and Gibson 2012a). There, it was proposed that change represents the most suitable property for defining and quantifying both subjective and objective complexity (and randomness). The measure has been shown to account for data collected in over 50 years of complexity and randomness research in psychology, allowing us to theorize on the nature of the relationship between complexity and randomness. Human observers associate randomness with a degree of chaos, noise, intractability and unpredictability that transcends their analytical ability. Crucially, in the context of this work, subjective randomness is closely related to the amount of irregular change present in a pattern. Psychological research has indicated that change is more difficult to process than the absence of change, as demonstrated by the mismatch between symmetrical theoretical distributions and the negatively skewed subjective complexity distributions (Falk and Konold 1997).
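
For concreteness, the following is a minimal sketch in the spirit of a change-based index: it counts symbol-to-symbol changes at successive difference levels of a binary string. It is my own simplification for illustration, not the published Aksentijevic and Gibson (2012a) measure.

    def change_profile(bits):
        """Amount of change (adjacent-symbol differences) at successive
        structural levels of a binary string. Illustrative simplification,
        not the published change-complexity measure."""
        level = [int(b) for b in bits]
        profile = []
        while len(level) > 1:
            level = [a ^ b for a, b in zip(level, level[1:])]  # 1 where neighbours differ
            profile.append(sum(level))
        return profile

    for s in ["00000000", "01010101", "01101001"]:
        p = change_profile(s)
        print(s, p, "total change:", sum(p))   # constant < periodic < irregular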

In contrast to the view which associates randomness with high complexity (e.g. Kolmogorov 1965), some authors propose the opposite. Equating randomness and simplicity has contributed to the reluctance to explain randomness in terms of complexity. To illustrate, in an article describing a measure of physical complexity, Adami and Cerf (2000) state: “Our intuition demands that the complexity of a random string ought to be zero, as it is somehow ‘empty’” (p. 64). A slightly different formulation was offered by Grassberger (1986): “We are faced with the puzzle that no accepted measure of complexity could, e.g. corroborate that music written by Bach is more complex than the random music written by a monkey” (p. 321; see also Gell-Mann 1995). Thus, although intractable, monkey music must be simpler than the orderly and pleasing pieces by Bach. Finally, random, high-entropy states in chemistry have been described as simple and symmetrical (e.g. Coren 2002).

Randomness can only be described as symmetrical or simple because of the limited perceptual/cognitive resolution of the observer. In other words, at the point at which complexity becomes too high to be addressed analytically, the appearance is of a simple, homogeneous texture (e.g. visual white noise) containing no accessible information. Any region of a random ensemble is statistically identical (or equiprobable) to any other. This characterization ignores the deeper level of explanation, namely, that apparently random phenomena contain vast amounts of information which is currently inaccessible and thus irrelevant to the observer (Aksentijevic and Gibson 2012b). Far from being simple or symmetrical, randomness is an idealization (simplification) aimed at describing the region of high complexity. This disposes of the inconsistency of high complexity inexplicably transmuting into perfect simplicity.

3 Conclusions

This paper has briefly outlined some of the problems associated with the use of randomness in psychological research. Most of these arise from the dominance of objective, probabilistic models of randomness, which are taken as standards against which human performance is judged. Departures from the probabilistic norm are interpreted as reflecting the inability of human observers to comprehend and mimic random processes. The above discussion forces the question of how human observers are supposed to mimic perfect unpredictability, which escapes even the most sophisticated algorithms. Less ambitiously, how are they supposed to mimic different probabilistic models? It is clear that this goal is unattainable. If we ask ourselves “why should they?”, we must conclude that the goal is not sensible either. Why would we expect observers whose cognition is based on order and pattern to be able to produce sequences that are completely patternless or that conform to some human-generated stochastic model? Humans are poor at generating and detecting randomness for two reasons. First, as already stated, the strict definition of randomness is intrinsically opposed to human perception and cognition. A random process can generate any outcome, leaving observers completely helpless. If they label a disordered sequence random, they will be told that it is no more random than a sequence of zeros, or that they are committing the gambler’s fallacy—forcing them to suppress their (correct) intuition that ordered patterns are likely to be generated by a deterministic process and that random patterns come from complex processes which they cannot understand. Equally, if they characterize an ordered pattern as non-random (hot hand), they will be informed that they are wrong and that streaks are often produced by random processes.

Second, objective models of randomness and randomness generators show human performance to be deficient because they have been created in order to overcome human predilection for pattern. The conflict between subjective complexity and objective randomness is not a consequence of inability to grasp a well-defined theoretical concept. As discussed here, even experts have trouble defining randomness. Rather, humans use a wide range of aids in order to generate unpredictable outcomes (e.g. coin tossing, randomization algorithms) precisely because they are aware of their limited ability to mimic complex behaviors. More generally, unpredictability lies at the core of the widespread interest in sport and games of chance. Objective models of randomness are superior to human observers because they can draw on large and complex data streams with little effort. By contrast, observers can sample only very limited spatiotemporal slices of the world. For any reasonably complex process, observers can only access a small (often vanishingly small) portion of its output. It should come as no surprise that they produce patterns which represent scaled-down versions of real-world (or algorithm-generated) processes. This observation exposes the circularity at the heart of the law of small numbers (Tversky and Kahneman 1971), the supposedly erroneous belief that the balance of random outcomes on a large scale is repeated on smaller scales.

All of the above suggests that the objective study of randomness behavior is ultimately circular—objective models of randomness represent abstract extensions of subjective notions such as disorder, which give rise to those very models. The circularity lies in the fact that objective models are used to investigate and judge the performance of the cognitive system which created the models in the first place. As in other situations in which objective measurements are used to probe perception and cognition, the subjective origin of these objective tools is ignored. It is assumed that objective measurement and mathematical models exist in a pure, abstract universe that is not subject to idiosyncrasies of human psychology (Fig. 1a). Applied to the area of subjective randomness research, this view has generated a circular conceptual framework creating the impression that human ability to model randomness is flawed.

Fig. 1 Two views of randomness. a The abstract concept of randomness currently used as a standard against which randomness performance is judged is conceptually separate from its imperfect subjective counterpart. b Randomness (both subjective and objective) reflects a flexible cognitive boundary on the subjective complexity continuum. Subjective complexity represents the starting point for all objective measures of complexity and the true referent of randomness

At the core of the circularity lies the failure to acknowledge the subjective origin of the concept of randomness. Subjectivity should not be viewed as a problem (Nickerson 2002, p. 334), but as the origin of objective models and theories, and as the reference against which those models should be judged (Fig. 1b). Stochastic models serve to annul, equalize and remove biases and traces of recognizable structure. Yet the originator of the models is the human mind, whose effort is governed by an inherent predilection for uniformity, order and pattern. The only sin observers are guilty of is their limited ability to cope with the quantity and/or complexity of information which has been produced with the express aim of confounding them.

The only way of overcoming this problem is to abandon the notion of randomness and replace it with complexity. As shown here, this entails no loss of precision or clarity. On the contrary, a complexity-based interpretation of randomness reestablishes the broken link between subjective cognition and the objective models that it creates. For the purpose of generation, randomness should be defined as the highest level of complexity attainable under a particular set of circumstances. The definition applies equally to human subjects and algorithms. It reflects the effort needed to produce truly random outputs and acknowledges the human origins of randomness. It also exonerates the RAND mathematicians who were reportedly caught correcting the corporation’s random number compendium (Gell-Mann 1994, p. 44). I propose that the notion of change allows complexity to be defined in a way that bridges the gap between the subjective and objective domains and accounts for a substantial proportion of the variance in subjective randomness perception. It does so by bringing together the physical concepts of energy and entropy on the one hand, and the subjective psychological notions of order and regularity on the other.

The complexity boundary can be pushed further away from the perceptual baseline but only at a cost. In terms of perception and judgment, randomness should be defined as the level of complexity that defies analysis or interpretation at any given time. As is the case with the previous definition, this one is flexible and context dependent. To illustrate, sophisticated algorithms can produce outputs that can be considered random for most practical purposes. However, many examples show that patterning can be found even in the most random-appearing strings given sufficient effort (Gardner 1989). Complex domains that are considered intractable eventually yield useful information as a consequence of systematic investigation (e.g. junk DNA).

In conclusion, randomness can be defined as an idealized upper boundary of complexity. Re-conceptualizing randomness in terms of complexity could have multiple benefits not least of which would be an intuitive and easily understandable definition we could offer first-year psychology students (Aksentijevic 2015).