1 Introduction

This paper deals with studies initiated more than 20 years ago [50] that are still in progress today. It presents a summary of the theoretical scaffolding used to build an experimental paradigm that allows for a naturalistic study of human anticipation, i.e., “future-oriented action, decision, or behaviour based on a (implicit or explicit) prediction” [78].

The argumentative style chosen to deliver the exposition rests heavily on the use of original quotes taken from the authors involved, giving readers the chance to perform their own exegesis.

The importance of what is presented here should be weighed by its practical outcomes, as Paul A. Weiss suggested: “the validity of a scientific concept is no longer decided by whether it appeals to “common sense”, but by whether it “works”” [97].

The purpose of the first section of this paper is to advocate a new definition of “Executive functions” (EF) based on its lost original sources, namely the Soviet theory of functional systems (FS) of Anokhin and Bernstein.

The second section has three subsections. In the first, some of the key notions of the theoretical convergence between Anokhin, Bernstein, Peirce and Piaget are highlighted. In the second, an inferential engine is depicted using the mature inference theories of Peirce and Piaget. In the third, an inferential cyclic structure is identified at the core of cognitive FS and postulated as the “engine”, or the “executive”, of EF.

The third section presents a diagrammatic model that displays the dynamics of the inferential engine, allowing its representation and further statistical modelling using Euclidean and non-Euclidean measures. The model is substantiated in a problem-solving task presented to human and non-human solvers. Moreover, some representative results from our previous studies are illustrated and future applications are advanced.

2 Executive Functions

The doctrines of the control and regulatory role of the brain in cognition and behaviour were already present in ancient Greek medicine and philosophy from the sixth century B.C.E. and reached their climax with Galenus in the second century A.D. [24]. Nevertheless, the mechanisms by which the brain controls and regulates cognition and behaviour were only beginning to be understood in the 19th century, in parallel with growing knowledge of the functions performed by the brain’s frontal lobes.

The concept of EF has occupied a pivotal position in the elucidation of how the brain relates to cognition and behaviour; nevertheless, its definition has become one of the most difficult conundrums for neuroscientists and clinicians [13, 27, 34]. Reflecting this controversy, more than 30 different definitions have been given to date [34]; the currently accepted view considers a wide set of more or less independent neurocognitive processes and abilities, i.e.:

  • Thinking, reasoning and problem-solving.

  • Anticipating, planning and decision-making.

  • The ability to sustain attention and resistance to interference.

  • Utilisation of feed-forward, feedback and multitasking.

  • Cognitive flexibility and the ability to deal with novelty [19, 99, 100].

There has been a great deal of misunderstanding surrounding the concept of EF since its inception. Most scholars trace its origin back to Alexander Luria, but there the trail mysteriously goes cold, as in the story of the search for the source of the river Nile.

The fact is that when Luria [60, 64] re-examined the meaning of the concept of “function” in relation to mental activity, few scholars noted that his references pointed to two Soviet researchers: Pyotr Kuzmich Anokhin and Nikolai Aleksandrovich Bernstein.

To our knowledge, besides Luria, only Ch. Fernyhough, O. A. Semenova and A. Kustubayeva, in a triad of insightful reviews, have remarked on the relation between functional system theory and executive functions. These authors did not, however, dig deep enough into Anokhin’s and Bernstein’s works to extract the more fruitful consequences of this relationship [31, 49, 88]. According to Luria:

… a function is in fact a functional system (a concept introduced by Anokhin) directed toward the performance of a particular biological task and consisting of a group of interconnected acts that produce the corresponding biological effect [64].

Moreover, Luria rightly states that this meaning of “function”, although in a simplified manner, was already used by Hughlings Jackson when analysing the anatomical and physiological localisation of movements in the brain:

For the nervous centres do not represent muscles, but very complex movements, in each of which many muscles serve [41].

As early as 1928, Luria had already been using a rudimentary version of the concept of FS when adapting his motor method to study affective reactions [59], as well as in his 1930 co-authored paper with Vygotsky [96].

Contemporary motor control scholars have kept alive only Bernstein’s version of the functional systems theory under the name of “synergies”, forgetting Anokhin’s contributions [25, 44, 55, 56]. In contrast, both Anokhin and Bernstein’s ideas have thrived among ergonomics scholars [14, 18].

It is necessary to emphasise that the original concept of EF does not point to an independent system that controls and regulates cognition and behaviour. Rather, EF is an intrinsic aspect of cognition and behaviour that develops and exists only during a cognitive activity or behaviour and does not exist by itself. As Bernstein and Anokhin put it:

we want to emphasize that control and controllability never and nowhere come into being in isolation, as phenomena that exist just for their own sake. Control is needed whenever a task is set, a goal is determined that has to be reached [30].

Every functional system possesses regulatory properties which are inherent only in its integrated state and not in its individual components [9].

Luria, following Anokhin and Bernstein, depicted cognitive processes as complex functional systems:

that they are not “localized” in narrow, circumscribed areas of the brain, but take place through the participation of groups of concertedly working brain structures, each of which makes its own particular contribution to the organization of this functional system [62].

Furthermore, Luria distinguishes three principal “functional units” in the brain whose participation is necessary for any type of mental activity, i.e.:

  1. A unit for regulating tone or waking;

  2. A unit for obtaining and storing information arriving from the outside world;

  3. A unit for programming, regulating and verifying mental activity.

The third unit is involved in what is at present called “executive functions”. Luria clearly states that:

It would be a mistake to imagine that each of these units can carry out a certain form of activity completely independent … Many years have passed since psychologists regarded mental functions as isolated “faculties” [62].

Regarding the notion of faculties, Luria recalls his friend and co-worker:

The famous Soviet psychologist L. S. Vygotskii made the decisive contribution to the development of scientific psychology when he stated that psychological processes are not elementary and inborn “faculties,” but are, rather, formed during life in the process of reflection of the world of reality, that they have a complex structure, utilizing different methods for achieving their goal, which change from one stage of development to the next. He considered that the most important feature characterizing higher psychological functions is their mediated character, the fact that they rest on the use of external aids (tools for movements and actions, language for perception, memory, and thought) [65].

It is noteworthy that every aspect mentioned in the recently reviewed literature as being fundamental for EF (see Sect. 2, third paragraph) can be seen as a subordinate concept (species) of an extensional definition [70, 92]. Furthermore, these subordinate concepts can be mapped directly onto the main “operation stages” of functional systems [13, 27, 35, 46, 57] (see Fig. 1 and Sect. 2.1). These considerations allow for the advancement of the nominal, theoretical definition [37, 38] of executive functioning proposed here.

Fig. 1

General architecture of a functional system according to Anokhin [2]. In the diagram, SA is starting afferentation and CA is contextual afferentation. The operation stages of the functional system are: preparation for decision making (afferent synthesis), decision making (selection of an action), prognosis of the action result (generation of the acceptor of action result), generation of the action program (efferent synthesis), performance of an action, attainment of the result, backward afferentation (feedback) to the central nervous system about the parameters of the result, and comparison between the result of the action and the prognosis

This definition of EF re-links the concept to its original sources. Paraphrasing Anokhin: executive functions are any of “those specific mechanisms of the functional system which provide for the universal physiological architecture of the behavioral act” [8].

Additionally, this definition can be conveniently summarised and operationalized using the inference theories proposed by Peirce and Piaget [87, 83] (see Sect. 3).

To illustrate this definition, the next subsection presents a diagram and a brief review of those “specific mechanisms” as they were portrayed by Anokhin.

2.1 The Architecture of the Behavioral Act

According to the theory developed by Anokhin in 1932–5, a functional system is always a dynamic central-peripheral formation of a cyclic character that includes not only the central nervous system but the whole body and the results of the actions exerted by the subject upon the environment. All functional systems, independent of their level of organisation, have the same functional architectonics, where the result is the leading factor in achieving a stable organisation.

The composition of the functional system is not limited to the central nervous structures which fulfil the most delicate integrating role in its organization and impart the appropriate biological property to it. It must, however, be remembered that this integrating role is necessarily manifested in the central-peripheral relations by which the working periphery determines and implements the properties of the functional systems, which adapt the organism to the given dynamic situation by their end effect [9].

Moreover, Luria, paraphrasing Anokhin and Bernstein’s concepts, writes in a 1973 review:

Every behavioral function is really a functional system, which preserves a stable goal but uses different links of operative behavior to come to a desired result. It is obvious that, in all these cases, there is not only a certain “feed-back” needed for control of the effect of behavior, but also a certain “feed-forward,” which establishes plans and programs and which is of decisive importance for elaboration of complex forms of behavior [61].

In the following subsections, a diagram and a summary explanation of the architecture of a FS are provided according to its sequence of operation stages, i.e.:

  1. Preparation for decision making (afferent synthesis);

  2. Decision making (selection of an action);

  3. Prognosis of the action result (generation of the acceptor of action result);

  4. Generation of the action program (efferent synthesis);

  5. Performance of an action;

  6. Attainment of the result;

  7. Backward afferentation (feedback) to the central nervous system about the parameters of the result;

  8. Comparison between the result of action and the prognosis [9].

Afferent Synthesis. Afferent synthesis is an essential and universal stage in the enactment of any cognition, behavioural act or conditioned reflex. Afferent synthesis is the first stage in the operation of a FS that orders and integrates information simultaneously from the following sub-stages: (a) motivational excitation, (b) situational afferentation, (c) triggering afferentation and (d) memory mechanisms.

  (a) Motivational excitation: this sub-stage represents the organism’s needs to be satisfied by the behavioural act. The motivations can be created by nutritional, hormonal, cognitive processes (problem solving) or social needs. The dominant motivation helps to filter and actively select the relevant information by means of attentional processes, like orienting-investigative reactions, to set up the goals and actions to achieve the appropriate adaptive effects.

  (b) Situational afferentation: this sub-stage is composed of the total sum of stationary and serial afferences coming from the environmental setting in which the behavioural act is framed, leading to a pre-triggering integration of actual and latent excitations.

  (c) Triggering or starting afferentation: this sub-stage is composed of the actual manifestation of all the latent excitations at an active and discrete moment which maximises the success of the adaptation.

  (d) Utilisation of memory mechanisms: in this sub-stage, which is essential for afferent synthesis, all the previous sub-stages and the stored, relevant past experiences are linked together as a whole.

The correct functioning of all these sub-stages is assured by the constant active process of attentional mechanisms, like orienting-investigative reactions, that updates the incomplete or insufficient information coming from the different sub-stages.

Afferent synthesis is the main stage responsible for the establishment of goal directed behaviours. In a comment about the “creative” nature of this stage, Anokhin, following Pavlov’s ideas stated that:

As can be seen from the enumeration of four components of the initial stage of development of behavioural acts, it is a truly all-embracing one. It is precisely at this stage, as in no other stage of development of the behavioural act, that Pavlov’s prevision of the decisive (the ‘so-called creative’) role of the afferent function of the cortex of the great cerebral hemispheres applies [6].

Decision Making. The stage of decision making is crucial in the consolidation of all the qualitative information of afferent synthesis and is always preceded by it. In this stage the organism “chooses” to enact one particular type of behaviour among an infinite repertoire of possibilities, in order to achieve its goal. Anokhin remarks that this stage is a kind of “logical process” that has been “designated as ideation, or as a state of the eureka type” [9]. This account is astonishingly similar to the description given by Peirce of the experience of abductive inference:

The abductive suggestion comes to us like a flash. It is an act of insight, although of extremely fallible insight. It is true that the different elements of the hypothesis were in our minds before; but it is the idea of putting together what we had never before dreamed of putting together which flashes the new suggestion before our contemplation [75].

The decision making stage is not only present in behavioural acts, but also in autonomic functions, like respiration or blood pressure control mechanisms.

Thus, the physiological meaning of decision making in the patterning of a behavioral act lies in three highly important effects:

  1. Decision making is the result of afferent synthesis accomplished by the organism on the basis of the dominant motivation.

  2. Decision making eliminates superfluous afferentation, thereby promoting the formation of an integral of efferent impulses having adaptive value for the organism in a specific situation.

  3. Decision making is a critical moment after which all combinations of impulses assume an executive, efferent character [9].

Generation of the Action Program and the Acceptor of Action Result. Immediately after afferent synthesis and the decision that sets the goal, two simultaneous events take place:

  (a) The creation of an action program (derived from the decision making stage) containing the procedural scheme of efferent excitations to be implemented by the peripheral effector organs intended to produce a result.

  (b) The creation of a mechanism, called “acceptor of action results”, that contains the efferent parameters of the anticipated results of the action to be performed. The efferent parameters are anticipated from predictions based on the logical consequences expected from the actions to be performed.

Performance of Action and Attainment of Results. These two stages correspond to the execution and to the actual performance of the action, as patterned in the action program and also to the actual results obtained.

Backward Afferentation (Feedback). This stage is composed of the stream of afferent impulses that carries the actual parameters of the result, travelling in the opposite direction to the action impulses to reach the acceptor of action results, closing the behavioural loop.

Comparison Between the Result of Action and the Prognosis. This stage completes the behavioural cycle. In this stage, the actual parameters of the result just obtained are compared with the predicted parameters held in the acceptor of action results. Depending on the outcome of this comparison: if the desired results are not obtained, a new cycle is initiated and this information is passed on to create a new afferent synthesis; on the contrary, if the desired results are achieved, the cycle ends.
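For readers who find pseudocode helpful, the cyclic character of these stages can be summarised as a control loop. The R sketch below (R being the language used for the modelling in Sect. 4.1) is our own illustration, not Anokhin’s formal model; the toy “organism”, its goal variable and the tolerance are placeholder assumptions.

```r
# A minimal, runnable sketch of one functional-system cycle (our illustration,
# not Anokhin's formal model). A toy "organism" drives an internal variable
# towards a goal value; each step stands in for the corresponding FS stage.
run_functional_system <- function(goal = 10, state = 0, tolerance = 0.5) {
  repeat {
    # 1-2. Afferent synthesis and decision making: integrate the need (dominant
    #      motivation) with context and memory, then select one action.
    need <- goal - state
    if (abs(need) < tolerance) break           # motivation satisfied: cycle ends
    action <- sign(need) * min(abs(need), 3)
    # 3-4. Acceptor of action result (prognosis) and efferent synthesis (program).
    prognosis <- state + action
    # 5-6. Performance of the action and attainment of the (noisy) actual result.
    result <- state + action + rnorm(1, sd = 0.2)
    # 7-8. Backward afferentation of the result and comparison with the prognosis;
    #      the outcome feeds the next afferent synthesis via the updated state.
    message(sprintf("prognosis %.2f, result %.2f, mismatch %.2f",
                    prognosis, result, result - prognosis))
    state <- result
  }
  state
}

run_functional_system()
```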

To summarise: the mature concept of EF is rooted in Anokhin’s [7, 8] and Bernstein’s [16, 98] theory of “functional systems” and in Vygotsky’s methodological approach [95, 101]. This was later used by Luria to analyse the disturbances of higher cortical functions provoked by local brain lesions [17, 61, 63, 64]; in particular, Luria’s work on the frontal lobe or dysexecutive syndrome was key in differentiating EF from other aspects of cognition [49].

3 Anokhin, Bernstein, Peirce and Piaget: Some Highlights of a Convergence

The current convergence of complementary theoretical views regarding the nature of cognitive processes conveys an evolutionary and developmental perspective that localises thinking processes within organism-environment (including social) co-ordinations, i.e., as embodied and functionally embedded in the world (by means of sensory-motor cycles).

Peirce’s “Inferences theory” [47, 87], the “Soviet functional systems” [5, 8, 15] and “Activity theory” [91], Piaget’s “Genetic Epistemology” [81–83], “Ecological psychology” [28, 42, 90] and “Non-linear Dynamical Systems” theory [43] now make it possible to advance dynamical models and to explore new experimental paradigms in cognition.

Some of the basic tenets of the embodied mind thesis have a long history in Western culture. These ideas can be found in early Greek medicine of the fifth century BC: Hippocrates’ text “On the Sacred Disease” [40] and Aristotle’s book “De anima” (On the Soul) [11] bring out a compelling account of this. Moreover, Lo Presti [58] has recently remarked that the ancient Greeks’ accounts of cognition are similar to the contemporary views provided by the so-called ‘embodied mind’ theories.

Later, in the 17th century, Baruch Spinoza explains, in Proposition XVI of the second part (Concerning the Nature and Origin of the Mind) of his “Ethica, ordine geometrico demonstrata”, that humans can only have cognition through the modification of the human body due to the influence of the environment and, vice versa, through the modification of the environment due to the influence of the human body. Moreover, in the note to Proposition VII, Spinoza declares the dynamic nature of thinking, pointing out that an idea cannot be thought of by itself as a single act, i.e., an idea can only be perceived through another thought, which is again perceived through another thought, and so on, forming an endless chain [89]. A comprehensive account of modern embodied mind theories can be found in the book “The Embodied Mind” by Varela et al. [94].

Likewise, Peirce, who was the first to put forward an integrated and plausible theory (anticipating cybernetic concepts by more than 40 years) that considered cognitive activities such as thinking to be dynamic, self-controlled semiotic processes, affirms that every thought is a sign that can only be interpreted by another, subsequent thought-sign in a cognitive stream [74].

Moreover, Peirce established the intrinsic reciprocity between the semiotic elements (Icons, Indexes and Symbols) and the inferences (Abduction, Induction and Deduction) carried out in a process of enquiry:

Now, I said, Abduction, or the suggestion of an explanatory theory, is inference through an Icon, and is thus connected with Firstness; Induction, or trying how things will act, is inference through an Index, and is thus connected with Secondness; Deduction, or recognition of the relations of general ideas, is inference through a Symbol, and is thus connected with Thirdness [77].

Apel [10] and recently Magnani [67] have commented on this fundamental relationship as well.

Peirce’s idea that every cognition rests on inferences was taken from Kant, who advanced that “there is no cognition until the manifold of sense has been reduced to the unity” [29]. Peirce’s inference theory can be separated into two periods: first a syllogistic period, followed by a methodological period related to processes of enquiry.

Peirce’s later theory distinguishes three basic kinds of inferences that are interdependent and work in an interconnected fashion: abduction, a twofold process of generating hypotheses and selecting some of them to work on; deduction, which draws out testable predictions from the hypothesis; and induction, which evaluates those predictions coming from deduction by comparing them with the actual results [87].

Notwithstanding, Peirce, in his paper “A theory of probable inference”, went beyond the logical aspects of cognition, observing the analogy between the structure of the syllogism and the elements and mechanics of the animal reflex arc. Habermas has shown convincingly that for Peirce the “logical forms pertain categorically to the fundamental life processes in whose context they assume functions” [36]:

In point of fact, a syllogism in Barbara virtually takes place when we irritate the foot of a decapitated frog. The connection between the afferent and efferent nerve, whatever it may be, constitutes a nervous habit, a rule of action, which is the physiological analogue of the major premise. The disturbance of the ganglionic equilibrium, owing to the irritation, is the physiological form of that which, psychologically considered, is a sensation; and, logically considered, is the occurrence of a case. The explosion through the efferent nerve is the physiological form of that which psychologically is a volition, and logically the inference of a result [76].

Peirce juxtaposes the three kinds of inferences according to the cyclic sequence of stimuli in the mechanics of the reflex arc:

Deduction proceeds from Rule and Case to Result; it is the formula of Volition. Induction proceeds from Case and Result to Rule; it is the formula of the formation of a habit or general conception–a process which, psychologically as well as logically, depends on the repetition of instances or sensations. Hypothesis proceeds from Rule and Result to Case; it is the formula of the acquirement of secondary sensation–a process by which a confused concatenation of predicates is brought into order under a synthetizing predicate [76].

From a physiological point of view, Ivan Pavlov noticed that the main adaptive feature of conditioned reflexes is their signalling character, which gives rise to an anticipatory activity [73]. Krushinskii has commented in extenso on Pavlov’s views regarding these matters, and on a special kind of “association” that expresses the cause-and-effect relationship between stimuli at a particular moment, namely “extrapolation reflexes”. According to Pavlov, this type of association, which bears the causal relation of world events, “must be regarded as the beginning of the formation of knowledge, the forging of a permanent connection between objects” [48].

This insight, particularly regarding the signalling and anticipatory aspects of these kinds of processes, is the core principle of functional system theory, developed later by Anokhin [4] as an improvement on Pavlov’s ideas. The notions developed by Anokhin provide a physiological and psychological substantiation of Peirce’s inference theory [3, 8].

In a remarkable text, Anokhin, departing from his FST, arrives at conceptions of the mechanisms involved in the creation of “meaning”, and of all kinds of searching behaviour (including enquiry processes), similar to those proposed by Peirce and Piaget:

The conceptions expounded by us are of special interest in relation to the physiological analysis of specifically psychological conceptions. The conception of “meaning” in education and in perception of the outside world, for example, is obviously a variant of coincidence between the stored conditioned excitation and return afferentations, which are “meaningful” in relation to this stored excitation. All education proceeds with return afferentations playing an obligatory correcting role, and indeed it is only on this basis that education is possible. Every correction of mistakes is invariably the result of non-coincidence between the excitations of the acceptor of effect and return afferentations from the incorrect action. Without this mechanism, both the detection of the error and its correction are impossible. It can hardly be disputed that practically any acquisition of habits (speech, labour, athletic etc.) proceeds in the same way as was indicated in the schema for continuous compensatory adaptation. All forms of searching for objects are based on the features of the apparatus of the acceptor of effect. It would be impossible to “find” anything if the object sought did not agree in all its qualities with the qualities of the excitations in the stored acceptor of effect [4].

Anokhin, Bernstein, Peirce and Piaget all share a common naturalistic idea, namely that the centre of life is a self-regulating process and that there is a continuity and an isomorphism of the self-regulatory mechanism in all the goal-directed activities of living organisms. This self-controlled organisation is present at all hierarchical levels, ranging from the very basic vital functions to the highest manifestations of cognition, such as thinking and reasoning. Dewey, commenting on Peirce’s account of the enquiry process, highlights the importance of this matter.

The term “naturalistic” has many meanings. As it is here employed it means, on one side, that there is no breach of continuity between operations of inquiry and biological operations and physical operations. “Continuity,” on the other side, means that rational operations grow out of organic activities, without being identical with that from which they emerge. There is an adjustment of means to consequences in the activities of living creatures, even though not directed by deliberate purpose [26].

The common source of the self-regulating notions of Peirce and Piaget can be found in J. M. Baldwin’s genetic theory of knowledge (an author who inexplicably remains in oblivion), particularly in his idea of psychonomic regulatory mechanisms [12].

Piaget’s theory of the equilibration of cognitive structures stands in close concordance with those of Peirce, Bernstein and Anokhin. In this framework, the capacity to draw inferences is at the core of human cognition, performing a central role in the creation of knowledge and the equilibration of cognitive structures [33, 69, 82]. According to Piaget, these self-regulating mechanisms consist of a combinatory system of anticipations and retroactions “found at all levels from the regulations of the genome to those of behaviour” [80].

a certain number of other functions or functional properties are common both to the various forms of knowledge and to organic life, in particular all those which used to be covered by the inclusive and imprecisely analyzed notion of finality until recent days, when cyberneticians have succeeded in supplying teleonomic (not teleological) models or mechanical equivalents of finality. Among these are the properties of functional utility, of adaptation, of controlled variation, and, above all, of anticipation. Anticipation is in fact, along with retroactions, one of the most generally found characteristics of the cognitive functions. Anticipation intervenes as soon as perception dawns, and in conditioning and habit schemata, too. Instinct is a vast system of surprising kinds of anticipation, which seem to be unconscious, while the inferences of thought promote anticipation of a conscious kind, instruments that are constantly in use [79].

It is worthwhile to remark that the concordance between Piaget, Bernstein and Anokhin is tight; all of them could be regarded as precursors of cybernetics. They started with a similar critique of the Cartesian view of the reflex arc and from there arrived at an integrative concept of “function”. It is not surprising to find that Piaget’s central notions of “action schemata”, “assimilation”, “accommodation” and “regulation” and Anokhin’s and Bernstein’s concept of functional systems are just like the two faces of Janus Bifrons.

In the past, psychologists as well as many physiologists used the term “association” rather than assimilation. Pavlov’s dog associates the sound of a bell with getting food and subsequently begins to salivate when hearing the bell, just as though the food were there. But association is only one stage, singled out artificially from the whole process of assimilation. The proof of this is that the conditioned reflex is not stable on its own and needs periodic confirmations: if you continue only to sound the bell without ever following it up with food, the dog will cease to salivate at the signal. This signal, therefore, has no significance outside a total schema, embracing the initial need for food and its eventual satisfaction. “Association” is nothing but a piece of arbitrary selection, a single process picked from the center of a much wider process (most people today realize how much more complex the conditioned reflex is than was at first thought: in neurological terms to the extent that it depends on reticular formation and not only on the cortex; and in functional terms by causing the intervention of feedbacks, etc.).

From the quote above it can be observed that when Piaget is referring to the “total schema, embracing the initial need for food and its eventual satisfaction” he is clearly outlining what Anokhin and Bernstein have depicted in detail as functional systems.

In the last writings of Piaget et al. [83] and Peirce [75, 85], inferences are portrayed as going beyond mere connections between predicates, as in traditional syllogistics; these authors present inferences in the form of coordinators that create meaningful implications inside and among different cognitive elements (actions, functions, operations) at different hierarchical levels. By these means, inferences regulate and drive the organism-environment system in a continuity from the very basic sensorimotor levels, as in unconscious innate reflex actions, to the highest goal-directed conscious cognitive activities, as in logico-mathematical thinking.

Both authors are in total agreement regarding the role and mechanics of inferential activity. Piaget, in his book “Toward A Logic of Meanings”, talking about “meaning implications”, stated that:

Meaning implications are threefold in another way. Such action implications may take the following forms:

  (a) Proactive implications: They draw conclusions from the propositions involved; that is, they assert that if A → B, the Bs are new consequences derived from A.

  (b) Retroactive implications: Instead of dealing with consequences, they relate to preliminary conditions and they express the fact that if A → B, then A is a preliminary condition for B.

  (c) Justifying implications: This form of implications relates forms (a) and (b) through necessary connections reaching the “reasons.” In other words, implications have a threefold orientation: amplification bears on consequences; conditioning bears on preliminary conditions; and deepening brings out the reasons [83].

To emphasize his agreement with Peirce, Piaget, in a footnote in the same book, explains that the distinction between proactive and retroactive implications was already made by Peirce, who called them “predictive” and “retrodictive” implications, or, as Peirce also used to call them, deduction and abduction.

To summarise: Peirce [87], followed by Piaget [39, 83], depicted inferences as organism-environment, goal-directed, cyclic and self-controlled processes (in a continuum that spans from unconscious to conscious activities) that regulate the actions performed by the agent and the reactions from the environment (action results) by:

  • Guessing the status and reactions from the environment (Abduction /Hypothesis formation).

  • Anticipating by predicting, based on the former guess or hypothetical configuration, the status and reactions from the environment (Deduction).

  • Evaluating the result of the actions performed by the agent by comparing the former prediction with the actual parameter (Induction).

3.1 The Inferential Engine

The regulatory aspect of inferences is not very noticeable at the sensorimotor levels; but once a goal-directed behaviour is performed, even the unplanned, spontaneous actions of the organism become part of an inferential cycle, because these actions, once performed, produce a “result” that can be evaluated and corrected depending on its success or failure in reaching the goal.

Following Peirce and Piaget, it can be postulated that a minimal cognitive organism-environment system gives rise to an inferential process taking the form of a functional unit, i.e., an active entity composed of three interconnected moments, namely abduction, deduction and induction, that renders a goal-directed and self-organised functional cycle.

The motor of this inferential cycle works like an organic Wankel engine, propelled by a three-sided rotary piston whose strokes are abduction, deduction and induction. The movement of this engine is triggered by a cognitive need (a problem) and departs from an initial state of rudimentary knowledge. The first stroke is an abduction that moves the cycle in the direction of creating or selecting (in further steps, once the engine is running) a hypothesis. The second stroke is a deduction that takes the hypothesis, develops its formal consequences and generates a (testable) prediction that is carried out in the world, producing a “result”. The third stroke is an induction that evaluates the “result” of the practical trial, i.e., the accuracy of the prediction, pushing the cycle into a new state of knowledge that becomes the input for the next cycle (see Fig. 2 and the sketch below).

Fig. 2

The inferential engine depicted as an organic Wankel rotary engine (upper left), and the inference cycles showing the sequence of abduction, deduction and induction during a cognitive task (bottom right)
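To make the three strokes concrete, the following minimal R sketch runs the cycle on a toy search problem (finding a hidden integer by halving the interval). The problem, the variable names and the halving heuristic are illustrative assumptions of ours, not part of the task described in Sect. 4.

```r
# A minimal, runnable sketch of the abduction-deduction-induction cycle on a
# toy problem (find a hidden integer in 1:100); all names are illustrative.
set.seed(1)
hidden    <- sample(1:100, 1)              # state of the world, unknown to the solver
knowledge <- c(lower = 1, upper = 100)     # initial, rudimentary state of knowledge

repeat {
  # Abduction: create/select a hypothesis compatible with the current knowledge.
  hypothesis <- unname(floor((knowledge["lower"] + knowledge["upper"]) / 2))
  # Deduction: derive a testable prediction from the hypothesis.
  prediction <- "probing the hypothesised value will return 'equal'"
  # Action in the world: carry out the practical trial and obtain a result.
  result <- if (hidden == hypothesis) "equal" else if (hidden > hypothesis) "higher" else "lower"
  # Induction: evaluate the prediction against the result and update the state
  # of knowledge, which becomes the input of the next cycle.
  if (result == "equal") break
  if (result == "higher") knowledge["lower"] <- hypothesis + 1 else knowledge["upper"] <- hypothesis - 1
}
hypothesis  # the goal state reached by the cycle
```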

Once this engine is running, it takes on its own dynamical regime, i.e., a singular spatio-temporal pattern that is goal-directed, self-regulated and self-organised by its own results; it works at its own pace, according to its individual needs and levels of satisfaction, generating a dynamical figure in coherence and continuity with the state of the whole organism.

It is paramount in this account that the organism is coupled to the environment, which becomes a constitutive element of the cognitive cycle.

As a remarkable coincidence, an isomorphic functional architecture was experimentally unveiled by Anokhin and Bernstein in several physiological processes and goal-directed behaviours, showing the ubiquitous character of these processes in living organisms, within their theory of functional systems (TFS) [3, 15].

To acknowledge the striking parallelism between Anokhin, Peirce and Piaget, it can be shown that the inferential engine, depicted above, could be directly mapped onto the TFS in the following way:

  • Abduction (guessing): afferent synthesis; decision making.

  • Deduction (anticipating): acceptor of action result; efferent synthesis.

  • Induction (evaluating): backward afferentation; comparison of results.

Each pair of FS operation stages listed above is functionally equivalent to one of the three kinds of inferences as described by Peirce and Piaget. Based on this mapping and on the proposed definition of EF, it can be postulated that inferences are at the core of EF, playing the elusive role of the “executive” and forming an integrative, distributed and hierarchical control system with the cognitive functions at the top.

As a corollary, it can be said that one way to assess EF is by assessing its inferential engine. The next section shows how this task can be accomplished.

4 A Diagrammatic Turn: The Display and Modelling of Cognitive Anticipation

In his last paper, Bernstein expressed the need to devise representational tools or procedures for the analysis and representation of anticipatory models:

At present, the basic problem of primary importance for physiology of activity and even, perhaps, for all biocybernetics is the mathematical problem of displays (models, projections, images, and so forth).

The outlined theory of biological displays faces many other problems in addition to the general, perhaps principal problem of the analysis of models of the future and representing these models or codes [68].

The editors of Bernstein’s last paper make the same kind of remark, stating that “science has to think in terms of “images” or (maps) in order to understand the role of the brain”, and advocate a naturalistic theory of display. It is telling that Bernstein’s insights are still waiting to be developed.

Anokhin, when commenting on the key value of the “result” of the action in a functional system, provided some penetrating cues, stating that:

It is possible to represent the activity of the system and all its possible changes in terms of its results; this situation highlights the decisive role of the results in the behaviour of the system [2].

Keeping in mind Anokhin and Bernstein’s insights, it is our intention to show a way in which the representation, display and analysis of cognitive anticipatory behaviour could be accomplished. To do this, a set of principles outlined by Fugard and Stenning will be followed for the modelling of thinking and reasoning processes. These authors stated that the models must:

  1. encode representational elements involved in reasoning and processes for their transformation, that is, some notion of algorithm;

  2. include parameters which can be varied in order to characterise individual differences, for example, qualitatively different ways of thinking about problems or tendencies to reason one way rather than the other; and

  3. be grounded in data of people reasoning [32].

In two previous experimental studies [50, 52], using a microgenetic approach, it was possible to devise a diagrammatic model [20] and a related problem-solving task to study the dynamics of inferential reasoning (deduction-induction-abduction).

The working model consisted of a problem task embedded in a 2D diagram that could be used: (a) by the participant as a scaffold to solve the problem and (b) by the researcher to expose the microgenesis of the thinking dynamics entangled in its solution [21, 66].

The problem task is analogous to one used by Piaget to study children’s dialectical thinking [81]. In our case, the implemented task is an adaptation of the well-known board game “Battleship” [52, 53] (see Fig. 3).

Fig. 3

Abduction-deduction-induction model depicted on a battleship game board

Battleship is a popular guessing game for two players, played worldwide. The original objective of the game is to find and sink all of the other player’s hidden ships before they sink all of yours. This requires the players to devise their own ship positions while guessing those of the other player.

Our version of the game has been designed to be played by only one player at a time. In our case, the objective of the game is to find and sink four ships of different lengths (hidden in a board divided by a 10 × 10 grid) using the fewest possible shots, regardless of the time taken.

The game is a standardized computer version of Battleship developed for our research. The full task includes eight individual games, each defined by a standard template specifying the position of the ships [53]. Each participant is seated (beside the interviewer) in front of a computer screen running the game (see Fig. 4).

Fig. 4

Experimental setup

The child is requested to verbalize the coordinates (letter-number) of each shot and simultaneously click the mouse pointer on the corresponding position on the screen. During the task, the child receives visual feedback (on the computer screen) about the number of shots already fired, the time elapsed, and their ongoing performance (number of sunk ships). The total testing duration is approximately 20–40 min.

To fulfill the first modelling requirement outlined by Fugard and Stenning [32], an algorithm was built to render a standardised image-diagram that represents the inferential dynamics. The basic procedure consists of sequentially plotting the series of (x, y) shot coordinates and the time between shots for every game played by the subject [50] (for some examples, see Fig. 5).

Fig. 5

Sample of images/diagrams representing the inferential dynamics for a group of ten school children (S01*–S10*, upper ten rows) and three algorithms (S11*–S13*, bottom three rows), N = 13. Each column displays one game of a series of eight
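The following R sketch illustrates the kind of rendering just described. It is a simplified stand-in for the published algorithm: the column names, the use of line width to encode inter-shot time and the simulated data are our own assumptions.

```r
# A simplified sketch of the rendering step: the ordered (x, y) shot coordinates
# of one game are drawn as a connected path, with inter-shot times assumed to
# modulate the line width (slower decisions -> thicker segments).
plot_game_diagram <- function(shots, times, grid = 10) {
  # shots: data.frame with columns x, y (1..grid), in firing order
  # times: seconds elapsed between consecutive shots (length nrow(shots) - 1)
  plot(NA, xlim = c(1, grid), ylim = c(1, grid), asp = 1,
       xlab = "column", ylab = "row", main = "Shot trajectory")
  lwd <- 1 + 4 * times / max(times)
  segments(shots$x[-nrow(shots)], shots$y[-nrow(shots)],
           shots$x[-1], shots$y[-1], lwd = lwd)
  points(shots$x, shots$y, pch = 16)
}

# Toy example with made-up shots and times:
set.seed(1)
g <- data.frame(x = sample(1:10, 15, replace = TRUE),
                y = sample(1:10, 15, replace = TRUE))
plot_game_diagram(g, times = runif(14, 1, 8))
```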

The next subsection shows how the second and third modelling requirements suggested by Fugard and Stenning were achieved.

4.1 A Trilogy of Examples: The Individual, the Group and the Sex

The examples presented here have been taken from our past studies, which were aimed at mathematically describing and classifying different styles of thinking (inferential phenotypes) in children and adolescents. In order to achieve this, representative diagrams were created and their geometric measures (Euclidean and multifractal) were used as predictor variables (see Sect. 4). Regarding the classification methods, we followed the practice currently used in supervised machine learning modelling [22, 23]. For practical modelling purposes, the R language and software environment were used [54]. The geometric measures, i.e., the spectrum of Rényi dimensions and the lacunarity spectrum, were obtained using the package IQM Scientific Image and Signal Analysis in Java [1].
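For orientation, the sketch below shows the basic box-counting idea behind the capacity dimension Dq0 (the Rényi dimension at q = 0) used in this section. In our studies these measures were computed with IQM; this toy R version only illustrates the underlying computation and assumes a square binary image.

```r
# A toy box-counting estimate of the capacity dimension Dq0 (the published
# measures were obtained with IQM; this version only illustrates the idea).
box_count_dimension <- function(img, sizes = c(1, 2, 4, 8, 16)) {
  # img: square binary matrix (1 = pixel belongs to the figure, 0 = background)
  counts <- sapply(sizes, function(s) {
    idx   <- ceiling(seq_len(nrow(img)) / s)       # assign rows/columns to s x s boxes
    boxes <- rowsum(t(rowsum(img, idx)), idx)      # pixel sums per box
    sum(boxes > 0)                                 # number of occupied boxes N(s)
  })
  fit <- lm(log(counts) ~ log(1 / sizes))          # N(s) ~ s^(-Dq0)
  unname(coef(fit)[2])                             # slope = estimated Dq0
}

# Sanity check: a completely filled square should give a value close to 2.
box_count_dimension(matrix(1, 64, 64))
```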

The first example was taken from a group of 10 school children aged between 10 and 12 years, as well as from three artificial algorithms (N = 13). The purpose of this example is to illustrate the descriptive strength of the geometric measures and to show how they can be used to model individual inferential dynamics and to support further classification.

The results of each game played by the children or algorithms were represented diagrammatically as shown in Fig. 5.

In this figure, the first 10 rows of diagrams represent the inferential dynamics of 10 children. Row 11 corresponds to the performance of an algorithm with a probabilistically optimised searching and hitting routine, plus a feedback correcting mechanism. Row 12 shows the performance of an omniscient algorithm that knows the exact position of the ships on the board but fires its shots randomly. Finally, row 13 represents the games of an algorithm that fires all its shots randomly, without any kind of feedback.
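As an illustration of the simplest of these solvers, the R sketch below places four ships on a 10 × 10 board and lets a fully random, feedback-free agent fire until every ship cell is hit (in the spirit of the row-13 algorithm). The ship lengths and the placement routine are simplifying assumptions, not the standard templates used in the task.

```r
# A hedged sketch of a random, feedback-free solver on a 10 x 10 Battleship
# board; ship lengths and placement are simplified assumptions.
set.seed(42)

place_ships <- function(lengths = c(5, 4, 3, 2), grid = 10) {
  board <- matrix(0, grid, grid)
  for (len in lengths) {
    repeat {
      horiz   <- runif(1) < 0.5
      max_row <- if (horiz) grid else grid - len + 1
      max_col <- if (horiz) grid - len + 1 else grid
      row0 <- sample(max_row, 1)
      col0 <- sample(max_col, 1)
      if (horiz) {
        cells <- cbind(row0, col0:(col0 + len - 1))
      } else {
        cells <- cbind(row0:(row0 + len - 1), col0)
      }
      if (all(board[cells] == 0)) { board[cells] <- 1; break }  # no overlap
    }
  }
  board
}

random_solver <- function(board) {
  firing_order <- sample(length(board))                 # random permutation of the cells
  which(cumsum(board[firing_order]) == sum(board))[1]   # shots needed to sink every ship
}

random_solver(place_ships())
```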

The first interesting observation was that the visual patterns produced by a given child display a similarity among themselves that is conspicuously different from the patterns produced by another child. Ultimately, these patterns express the individual differences in the inferential dynamics of each child. This observation can be supported by using geometrical measures to describe the children’s performance.

When assessing the relationship between the geometrical complexity of the figures (for example, using the capacity dimension [Dq0]) and the number of shots [NS] performed by each child, the statistical models that describe these relationships are very idiosyncratic and remain stable, despite the improvement in performance due to a learning effect (see Fig. 6).

Fig. 6

Graph describing the relationship between the capacity dimension [Dq0] and the number of shots [NS] performed by the children (S01–S10) and the algorithms (S11–S13). The coloured lines represent the fitted regression line for each child and algorithm
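A minimal sketch of the per-subject models behind this kind of figure is shown below. The data frame, its column names (subject, Dq0, NS) and the simulated values are placeholders of ours, not the experimental data.

```r
# A hedged sketch of per-subject regressions of Dq0 on the number of shots,
# fitted to simulated placeholder data (not our experimental measurements).
set.seed(7)
games <- data.frame(subject = rep(sprintf("S%02d", 1:10), each = 8),
                    NS      = rpois(80, 60))
games$Dq0 <- 1.2 + 0.005 * games$NS + rnorm(80, sd = 0.05)

fits <- lapply(split(games, games$subject), function(d) lm(Dq0 ~ NS, data = d))
sapply(fits, function(m) c(coef(m), R2 = summary(m)$r.squared))  # one column per child
```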

To illustrate the discriminant power of the geometrical measures, a linear discriminant analysis (LDA) was performed using the following geometrical predictors: the Renyi spectrum [Dq0–Dq10], cumulative length, number of shots and the lacunarity spectrum [1–10], making a total of 23 measures (see Fig. 7).

Fig. 7

Combined-groups plot for the canonical discriminant functions from the children (S01–S10) and the algorithms (S11–S13). 73.0 % of the original grouped cases were correctly classified using the geometrical predictors: Renyi spectrum [Dq0–Dq10], cumulative length, number of shots and the lacunarity spectrum [1–10]

The results of the LDA showed that the set of geometrical measures chosen performed well, characterising and separating the children and the algorithms: 73.0 % of the original grouped cases, i.e., the individual games, were correctly classified. Moreover, at the level of individuals, a perfect classification of the children and the algorithms was obtained.
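The classification step itself can be reproduced with MASS::lda, as in the hedged sketch below; the feature matrix and its 23 column names are simulated placeholders for the geometric measures, so the resulting accuracy is meaningless and shown only to illustrate the workflow.

```r
# A hedged sketch of the LDA step with MASS::lda on simulated placeholder
# features (23 per game, 8 games per solver), purely to illustrate the workflow.
library(MASS)

set.seed(3)
feats <- data.frame(solver = factor(rep(sprintf("S%02d", 1:13), each = 8)),
                    matrix(rnorm(13 * 8 * 23), ncol = 23,
                           dimnames = list(NULL, paste0("m", 1:23))))

fit  <- lda(solver ~ ., data = feats)
pred <- predict(fit)$class                 # resubstitution predictions
mean(pred == feats$solver)                 # proportion of games correctly classified
```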

In general, when the inferential performance of each child is described by the diagrams, it displays a sui generis graphical pattern that becomes a kind of fingerprint of their inferential dynamics.

The second example was taken from an experimental situation devised to test the ability of the geometrical measures to perform group classifications in a diagnostic-accuracy paradigm. Human and artificial data were used. Two artificial data sets were created using data coming from ten children. The artificial anticipatory behaviour was simulated by two algorithms that modify the performance of these children, following two strategies. The first data set, which we called “robots”, was created by an algorithm that randomly shuffles all the shots, i.e., both the “scanning” and the “hitting” shots. The second data set, called “cyborgs”, was created by an algorithm that randomised only the “scanning” shots, leaving the human “hitting” shots unchanged. The final data set (N = 30) is composed of the groups of ten children, ten “robots” and ten “cyborgs”.
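The two surrogate-data transformations can be written down directly, as in the R sketch below; representing a game as a data frame with a logical hit column is our own assumption about the data layout.

```r
# A sketch of the two surrogate-data transformations described above. `game`
# is assumed to be one child's shots in firing order, with a logical column
# `hit` marking the "hitting" shots.
make_robot <- function(game) {
  game[sample(nrow(game)), ]                              # shuffle every shot
}

make_cyborg <- function(game) {
  out <- game
  scan_rows <- which(!game$hit)                           # "scanning" shots only
  out[scan_rows, ] <- game[scan_rows[sample.int(length(scan_rows))], ]
  out                                                     # hits stay in place
}

# Toy example:
game <- data.frame(x = sample(10, 12, replace = TRUE),
                   y = sample(10, 12, replace = TRUE),
                   hit = rep(c(FALSE, TRUE), 6))
make_cyborg(game)
```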

A scatterplot of the capacity dimension [Dq0] versus the number of shots [NS] for the three groups is shown in Fig. 8; moreover, a regression line is fitted to each group to visually highlight the differences among them.

Fig. 8

Graph describing the relationship between the capacity dimension [Dq0] and the number of shots [NS] performed by the three groups: cyborgs, humans and robots. The coloured lines represent the fitted regression for each group

To further assess the classification power of a set of geometrical measures, a logistic regression analysis was performed to separate human from non-human data (cyborgs and robots). The result of this analysis is shown as a receiver operating characteristic (ROC) curve in Fig. 9, in which the fitted model shows a good classification performance, with an empirical AUC statistic of 0.94.

Fig. 9

The empirical receiver operating characteristic (ROC) curve for the target condition nature (human, non-human [cyborgs and robots]), for the data fitted with a GLMM with multivariate normal random effects, using penalized quasi-likelihood (glmmPQL), and for a set of multifractal and lacunarity measures. The empirical AUC statistic is 0.9401243
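The general workflow of this step, fitting a binary classifier on geometric features and summarising it with an ROC curve and its AUC, can be sketched in a few lines of R. The sketch below uses an ordinary logistic regression with the pROC package on simulated placeholder features (the published analysis, as the figure caption notes, used a GLMM fitted with glmmPQL); the numbers it produces are therefore illustrative only.

```r
# A hedged sketch of the human vs. non-human classification step: ordinary
# logistic regression plus an empirical ROC/AUC, on simulated placeholder data.
library(pROC)

set.seed(11)
d <- data.frame(human = rep(c(1, 0), each = 80),
                Dq0   = c(rnorm(80, 1.5, 0.10), rnorm(80, 1.7, 0.10)),
                NS    = c(rpois(80, 55),        rpois(80, 70)))

fit     <- glm(human ~ Dq0 + NS, family = binomial, data = d)
probs   <- predict(fit, type = "response")
roc_obj <- roc(d$human, probs)      # empirical ROC curve
auc(roc_obj)                        # area under the curve
plot(roc_obj)
```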

The third example was taken from a sample of ten adolescents to illustrate the discriminant power of a set of geometrical measures that allows the classification of individuals by sex. To perform the classification, a GLMM with multivariate normal random effects, fitted using penalized quasi-likelihood (glmmPQL), was applied to a “training” data set of multifractal, lacunarity and MSSI measures [84].

To test the accuracy of the fitted model, a “testing” data set was used to obtain new cases to be compared with the predicted classes. The predictive performance of the model was evaluated using the ROC curve shown in Fig. 10. As illustrated in the ROC curve, the fitted model shows a good classification performance in predicting the class of the cases from the “testing” data set, with an empirical AUC statistic of 0.91.

Fig. 10

The empirical receiver operating characteristic (ROC) curve for the target condition sex (male, female), fitted to the testing data with a GLMM with multivariate normal random effects, using penalized quasi-likelihood (glmmPQL), and for a set of multifractal, lacunarity and MSSI measures. The empirical AUC statistic is 0.9166667
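A minimal R sketch of this train/test procedure is given below, using MASS::glmmPQL with a random intercept per subject and pROC for the held-out evaluation. The simulated data, the two feature names and the random-effects structure are placeholder assumptions; population-level predictions for the new subjects are computed by hand from the fixed effects.

```r
# A hedged sketch of the mixed-model classification with glmmPQL, trained on
# one simulated data set and evaluated on another (placeholder data only).
library(MASS)
library(nlme)     # glmmPQL builds on lme; fixef() comes from nlme
library(pROC)

set.seed(21)
sim <- function(n_subj) {
  sex <- rbinom(n_subj, 1, 0.5)
  data.frame(subject = factor(rep(seq_len(n_subj), each = 8)),
             sex     = rep(sex, each = 8),
             Dq0     = rnorm(n_subj * 8, 1.5 + 0.08 * rep(sex, each = 8), 0.1),
             lac     = rnorm(n_subj * 8, 0.8, 0.1))
}
train <- sim(10); test <- sim(10)

fit <- glmmPQL(sex ~ Dq0 + lac, random = ~ 1 | subject,
               family = binomial, data = train, verbose = FALSE)

# Population-level (fixed-effects only) predictions for the held-out subjects.
X     <- model.matrix(~ Dq0 + lac, data = test)
probs <- plogis(drop(X %*% fixef(fit)))
auc(roc(test$sex, probs))
```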

4.2 Final Observations, Prospects and Conclusion

The given operational definition of EF and the present method allow for the experimental study of human and non-human cognitive anticipation enacted by inferential processes. So far, the results obtained appear auspicious: it has been possible to expose different “inferential phenotypes” by representing and analysing them geometrically.

An interesting finding is that the relationship between the geometric measures and the number of shots performed remains constant for each individual (during the experimental period trialled), despite the learning effect across the tasks, becoming a sort of functional fingerprint of their thought dynamics and showing an identity through change. Additionally, these measures, due to their discriminative power, permit the classification of different inferential phenotypes.

Another interesting observation is that, in general, the coefficient of determination R2 from the models that relate the different geometrical measures to the number of shots seems to be higher for human solvers than for non-human solvers (artificial algorithms or humans with artificially modified data). In the future, this feature may give us a measure of the “humanness” of the fitted models. Paraphrasing Nietzsche’s aphorism, it could be said that human cognition has something that is “human, all too human”. Moreover, it can be hypothesised that in this particular experimental situation the values of the coefficient of determination R2 could be used as a kind of metric, non-algorithmic Turing test [93].

From a medical point of view, the statistical strength and the reliability of the relationships found between the target variables and the geometric measures used as predictors suggest that these kinds of measures, i.e., the Rényi spectrum, lacunarity and others [45], associated with the experimental paradigm, could have potential clinical use. This paradigm may allow for the quantitative description, classification, diagnosis, monitoring and screening of mental conditions that impair EF. In this clinical scenario, the geometric measures could be used as quantitative imaging biomarkers [71, 86] to develop tests that facilitate differential diagnosis in neuropsychology.

In a recent, unpublished preparatory pilot study that followed these insights, using a combined strategy similar to that presented in the second and third examples, we performed simulations with artificial ADHD surrogate data. The preliminary results provided strong empirical support for this line of enquiry, showing the discriminant power of these measures to distinguish between children without the target condition and artificially simulated ADHD children. The ROC analysis displayed promising values of specificity and sensitivity, with estimates between 0.91 and 0.94 for the area under the ROC curve (AUC), for both the traditional statistical models (multilevel logistic regression) and the machine learning classifiers [72] (neural networks, support vector machines and random forests) [51].
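For completeness, the sketch below shows the general shape of such a machine-learning comparison, using a random forest and a held-out split on simulated placeholder features; it is not the pilot-study code and its output has no bearing on the figures quoted above.

```r
# A hedged sketch of a machine-learning classifier on simulated placeholder
# features (illustrative only; not the pilot-study data or results).
library(randomForest)
library(pROC)

set.seed(5)
d <- data.frame(group = factor(rep(c("control", "simulated"), each = 80)),
                Dq0   = c(rnorm(80, 1.50, 0.10), rnorm(80, 1.60, 0.12)),
                lac   = c(rnorm(80, 0.80, 0.10), rnorm(80, 0.90, 0.10)))

idx   <- sample(nrow(d), 120)                          # simple train/test split
rf    <- randomForest(group ~ ., data = d[idx, ])
probs <- predict(rf, d[-idx, ], type = "prob")[, "simulated"]
auc(roc(d$group[-idx], probs))                         # held-out AUC
```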

Also as part of our former research, the same paradigm was used in a cognitive developmental setting. It was concluded that this approach could complement other methods used to evaluate and compare the evolution of cognitive phenotypes at different ages [53].

To conclude, it can be said that the given definition of EF, substantiated in this particular experimental paradigm (and in the related models based on certain geometrical measures), could help us increase our knowledge of the anticipatory aspects of human cognition.