1 Introduction

The term ‘conformed thoughts’ (Laurentiz 2015, 2018, 2019) was created to describe how an internal set of codes, norms, algorithms, and standards is malleable by external factors. In everyday life, our actions are determined by habits and externalized factors, which are capable of modifying the attitudes, behaviors, and cultural practices that shape our thoughts. In creative AI, similar processes can occur: The visual forms generated by algorithms (and in this context, we will be dealing with artificial intelligence/machine learning/deep learning) can be considered externalized thoughts. They use mathematical and statistical calculations for data analysis and model generation, and although invisible (generated inside a black box), they become externalized, and therefore sensitive, in a resulting image as they are fed back into the system. This leads us to consider that such ‘conformed thoughts’ do not have only formal characteristics—are not restricted to appearances—but are actions determined by processes capable of modifying the attitudes, behaviors, and cultural practices that ‘in-form’, ‘re-form’, and ‘con-form’ thoughts.

From the principle of “conformed thoughts”, we can make important considerations about representational aspects of creative AI. We can observe, for example, an encapsulation of the representational system starting from the “thing itself”, which is perceived and made an “object” in order to be communicated and shared (Deely 2004), and that is then synthesized into a “modeled object”, with the understanding that models are formed by objects, which in turn are “things themselves” that have been objectified.

In line with Flusser (2007), the passage from an “object” to a “model” carries levels of abstraction that are important to note. In other words, the modeling and learning process, which we suggest is a meta-processing of/for the generation of “conformed thoughts”, shows the relationship of the “thing itself” within these representational nests; even if the “thing itself” distances itself from the model, it is somehow preserved as a motivator and trigger of these processes. All this happens in a context of systemic feedback, with processes of evaluation, transformation, comparison with reference values, adaptation, regression, and codification, following the principles of cybernetic systems.

The digital system based on these processes appropriates these experiences and is guided by models and patterns that, in turn, will guide the results obtained. In these procedures, there is a new tension between “feelings and conformed thoughts”, and in an environment of mixtures of information and levels of abstraction, we find the potential to bring about new experiences, given this structural complexity. Faced with this scenario, artificial intelligence, machine learning, and deep learning are structures of/for generating conformed thoughts.

This chapter intends to present some studies already conducted on this subject and to point out some consequences for language and thought. The main interest is to present a representational model, promoted by information processing, that is capable of framing questions about computational algorithms—in particular, artificial intelligence—and the contribution of art to this process.

2 Justification and Previous Research

Humans make their own brain, but they do not know that they make it. (Malabou 2008, p. 1)

Several mobile games were released in 2012. One of them, Flow Free, by the American studio Big Duck Games, is distributed free of charge to this day. It is classified as an electronic puzzle game, in which the player must solve a problem using logical reasoning and a simple engine.

Flow Free is a Numberlink-type game, which involves finding paths to connect numbers, dots, or colors in a grid. In this case, there is a grid of squares (organized in horizontal rows and vertical columns, as on a board) with colored dots occupying some of the squares. The goal is to connect the dots of the same color, creating a flow of “paths” between them so that the whole grid is occupied by such “connecting paths”. The difficulty is determined mainly by the size of the grid, which can vary from 5 × 5 to 15 × 15 squares, and by the number of colored dots, which defines how many empty squares there are to traverse/fill. The challenge is to connect the colors so as to cover the whole grid in the number of steps set as ideal.

Let us understand how it works. Consider an n × n matrix of squares: Some of the squares are empty, and others are marked by colored circles (green, red, yellow, blue, orange…). Each color occupies exactly two different squares in the grid. The player’s task is to connect the two occurrences of each color by a continuous path made of horizontal and vertical movements; no two paths are allowed to cross. The game ends when all previously empty squares in the grid are filled by the created lines. Each path is measured by its difficulty in relation to the percentage of paths achieved; each move counts as a step, and there is a best flow ratio (number of steps) to achieve. The valid paths have already been predetermined so that they do not cross: in other words, a solution to the problem has already been generated, and the player must find it. The idea is that we have a set of nodes related by a reflexive and symmetric equivalence, such that “blue_1” being connected to “blue_2” is the same as “blue_2” being connected to “blue_1”, and the path between them will always be of equal length.
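To make these rules concrete, the sketch below (in Python, with a sample 5 × 5 solution of our own devising, not a level from the game) represents the board as a matrix of color labels and checks the two conditions a finished board must satisfy: every square is covered, and each color forms a single non-crossing path, so that its endpoints touch exactly one same-color square and its intermediate squares exactly two.

```python
# Illustrative sample only: a filled 5 x 5 board (labels 1-4 are "colors")
# and the two endpoint squares of each color.
SOLVED = [
    [1, 1, 1, 2, 2],
    [3, 3, 1, 2, 4],
    [3, 1, 1, 2, 4],
    [3, 1, 2, 2, 4],
    [3, 1, 2, 4, 4],
]
ENDPOINTS = {1: [(0, 0), (4, 1)], 2: [(0, 4), (4, 2)],
             3: [(1, 1), (4, 0)], 4: [(1, 4), (4, 3)]}

def is_valid(grid, endpoints):
    n = len(grid)
    if any(cell == 0 for row in grid for cell in row):
        return False                      # rule 1: the whole grid is covered
    for r in range(n):
        for c in range(n):
            color = grid[r][c]
            same = sum(
                grid[r + dr][c + dc] == color
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < n and 0 <= c + dc < n
            )
            # rule 2: an endpoint touches one same-color square,
            # an intermediate path square exactly two (no crossings)
            want = 1 if (r, c) in endpoints[color] else 2
            if same != want:
                return False
    return True

print(is_valid(SOLVED, ENDPOINTS))  # True for this sample solution
```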

After a prolonged period of playing this puzzle daily, we had the hasty idea of treating the results obtained as deciphered visual enigmas. If we consider these puzzles as problems to be solved by deduction, we are overestimating the process. It certainly involves logical inference: players perform heuristic search (in the computational sense), choose strategies based on previous experiences, and have goals to achieve; for each heuristic, there is a pruning process that removes certain branches of the search tree if they cannot become a consistent option, thus deciding which branch to pursue. For example, “paths that cross” will be discarded, as will “paths that do not fill all the grid squares”. Bear in mind that, while some grids require more effort than others, there is no creative action involved: it is a predetermined, finite grid.
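The pruning just described can be illustrated with a small backtracking search (our own sketch, not the game’s engine): colors are routed one at a time, branches that would enter an occupied square are cut before they grow, and routings that leave empty squares are rejected at the end.

```python
def solve(grid, colors, idx=0):
    """Route each color between its endpoints by backtracking.
    grid: n x n matrix with 0 = empty; colors: list of (label, start, goal),
    with both endpoints already marked on the grid."""
    n = len(grid)
    if idx == len(colors):                               # every color routed:
        return all(c != 0 for row in grid for c in row)  # reject unfilled boards
    label, start, goal = colors[idx]

    def walk(r, c):
        if (r, c) == goal:
            return solve(grid, colors, idx + 1)     # move on to the next color
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            inside = 0 <= nr < n and 0 <= nc < n
            # pruning: a square held by another path is never entered,
            # so crossing branches are cut before they grow
            if inside and (grid[nr][nc] == 0 or (nr, nc) == goal):
                kept = grid[nr][nc]
                grid[nr][nc] = label                # extend the path
                if walk(nr, nc):
                    return True
                grid[nr][nc] = kept                 # backtrack: branch abandoned
        return False

    return walk(*start)

# usage: mark the endpoints of the sample puzzle above, then search
grid = [[0] * 5 for _ in range(5)]
colors = [(1, (0, 0), (4, 1)), (2, (0, 4), (4, 2)),
          (3, (1, 1), (4, 0)), (4, (1, 4), (4, 3))]
for label, s, g in colors:
    grid[s[0]][s[1]] = grid[g[0]][g[1]] = label
print(solve(grid, colors))  # True: a full, non-crossing routing exists
```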

We authored a paper about this experiment in 2018 (Laurentiz 2018), in which we eventually realized that, since decisions were driven by previously learned repetitions, we already had evidence of learning. Moreover, an operation that can be successively reapplied to the structures resulting from its earlier application is the triggering principle of any language—that is, the property of recursion. After some time of this training by repetition, the grid configurations were memorized, and the mind predicted the next moves almost automatically, by reflex, depending on the mechanical agility required and the degree of difficulty of the grid. This means that, after a few levels, the player learns trends, formats, and rules, thus achieving better performance. From this point on, they start playing with the patterns and reacting from the configurations that have been memorized. For example, one would naturally first try to draw the outer, longer path lines and then solve the smaller central ones. Points that are far apart and resting on the edges of the grid are a foolproof clue to drawing a large line around the edge of the grid.

Our strategy was to find recognizable blocks—which we might call meaningful blocks—contours, paths, and repeated configurations that we had already learned. We would look first for easy-to-solve moves, i.e., those with obvious solutions (with the lowest percentage-of-moves index), for they were positioned in situations with no other options; then, we would try to complete the grid with the more problematic decisions. These were some of the strategies acquired through the repetition of previous moves. Tasks that require simple strategies and logical reasoning are exercised and repeated every turn. The point that interests us at this moment is the premise that repeating simple symbolic patterns from formal systems can generate traces of memories, trigger cognitive and sensory experiences, develop strategies, and promote changes in thinking (Laurentiz 2018).

This helps us, by analogy, to understand methods for generating training data, evaluating systems, modeling, and learning in AI algorithms. In other words, performing the same task to exhaustion can change our way of thinking. Recording all the experiences acquired also created opportunities for comparison, reference, identification, and recognition of blocks of meaning, and all this enhanced the process, considering that there is also a Web site with daily solutions for the game. Now, multiply this by the countless logic games we currently have. We are training these “conformed thoughts” all the time. These self-controlled, self-contained, deliberate thoughts are everywhere. One only has to be somewhere public to notice that people are plugged into their cell phones, in many cases playing some of these puzzles. Add to that the current number of social network applications. What effect will this have on us?

We had already studied memory traces from Gestalt theory (Koffka 1975) in other articles (Laurentiz 2017a, 2018), but recent research speaks of “neuroplasticity”, a term used by neuroscience for the brain’s ability to change with experience and keep some of those changes. In this approach, researchers define the brain as a dynamic, adaptive, information-seeking system that is interconnected and networked (Vasconcelos et al. 2011; Eagleman 2020). David Eagleman explains that the brain needs repeated practice to learn an activity, which can be motor or cognitive. This practice changes the brain configuration so expressively that “when medical students study for their final exams over the course of three months, the gray matter volume in their brains changes so much it can be seen on brain scans with the naked eye” (Eagleman 2020, p. 143). We are exhaustively practicing symbolic logics, especially in this period of 2019–2022, when our interactions and experiences are restricted almost exclusively to interfaces, screens, projections, and technical images. These logics are in themselves “conformed thoughts”, self-controlled and deliberate, the result of the concepts and knowledge models of a culture or group. Consequently, we are reorganizing ourselves, changing our habits, behaviors, and cultural practices, and this will also change our way of thinking. Are we aware that this means we are shaping our brains? Although it is always announced that machine learning networks learn from us, that we are the ones teaching them, is it not these “conformed thoughts” that are causing changes in our way of thinking? This means that both human and non-human systems are interconnected, and one interferes with the other.

There are several studies relating cognitive activities to games (Laurentiz 2018; Baniqued et al. 2014; Oei and Patterson 2014; Nef et al. 2020), and although this subject still warrants further research before a real cognitive pairing can be recognized, “puzzle Numberlink games are promising as a tool to monitor the progression of motor impairment in neurodegenerative diseases” (Nef et al. 2020, p. 1), for example, and one can already suggest “future studies to create game-based adaptive computerized cognitive assessments” (ibid., p. 3).

Here begins our investigation, which focuses on studies of logic and the general laws of signs. Considering that:

  1.

    all thinking is accomplished through signs: “logic is the theory of self-controlled, or deliberate, thought” (Peirce et al. 1994, CP 1.191);

  2.

    we have a brain capacity that allows changes from experiences and can retain certain changes;

  3.

    pattern recognition methods and “conformed thought” processes feed back into our everyday activities;

the goal is to present a representational model, promoted by information processing, that is capable of framing questions about computational algorithms—in particular, artificial intelligence—and the contribution of art to this process.

3 Defining Conformed Thought

Initially, it is necessary to define what we are naming “conformed thought”. This concept was born from an attempt to relate different representational models based on Vilém Flusser’s “escalation of abstraction”. In particular, the author identifies a “new abstraction”, referring to processes of technical image generation from photographic apparatuses to computational models; this abstraction results from a process of imagination different from the one that happens mentally and/or by hand, going from “something to the image of this thing”, and from the process of conceptualization that generates a “concept of the image of this thing”.

Something changes when a device starts to mediate this process and, later, when computer models are used: an imagination with a simulation character emerges, argues Flusser. It is still called imagination, he explains, because the intention of generating images remains, but, in his understanding, it produces a different kind of image. The images of the new imagination are projected by zero-dimensional processes and are the result of calculations, numbers, and computation. Flusser also says that it is as if “imagination had become autonomous” (Flusser 2007, p. 173).

At this point, we begin to delineate what we call “conformed thought” (Laurentiz 2015, 2018, 2019, 2021). “Conformed thought” is the result of a process of image generation by this new abstraction, starting from an idea of thinking that is configured through a numerical logic and that ends up generating a mathematical and statistical reordering of its own code. As Norval Baitello explains in the preface to the book Vilém Flusser: The Universe of Technical Images, “the escalation of abstraction […] is nothing more than an escalation of subtraction; it consists of the progressive removal of dimensions from objects, from three to two, to one and to zero dimensions” (Baitello, in Flusser 2012, p. 10), a path in which the codes of representation go through increasingly abstract symbolic systems. This abstractive retreat entails consequences, since “images are mediations between subject and objective world, and as such are susceptible to an internal dialectic: they imagine the objects they represent” (Flusser 2007, p. 166); with images projected by zero-dimensional calculations, one no longer confuses “what one imagines with what one has imagined” (Flusser 2007, p. 172), aside from the fact that these “images do not hide their simulation character” (Flusser 2007, p. 172). While images that mediate between humans and their objective world are, in some way, copies of facts and circumstances, the images of this new abstraction mediate between calculations and their possible application in the surroundings, which would indicate that these two imaginations walk in opposite directions (Flusser 2007, p. 173). Not only do they go in opposite directions, but computerized images can appear as if they were copies of circumstances, exactly like the images produced by earlier processes, impersonating their predecessor representational models. Even so, computerized images will always carry a potential for situations belonging to a new field of possibilities. Flusser concludes:

only when images are made from calculations, and no longer from circumstances (even if these circumstances are quite ‘abstract’), can ‘pure aesthetics’ (the pleasure in playing with ‘pure forms’) unfold; only thus can homo faber detach itself from homo ludens. (Flusser 2007, p. 175)

This leap to the zero-dimensional is a boldness that forces us to “renounce causal explanations in favor of the calculus of probabilities, and we must learn to renounce logical operations in favor of propositional calculus” (Flusser 2007, pp. 176–177). It is noteworthy that he is referring to formal systems—calculations that represent formal objects for the purpose of computing inferences and reaching goals, and that follow the rules of an abstraction process within a notation system that is itself formal.

It is important to say that it is the representational aspects that concern us, and that our interest is precisely to understand this new abstraction and to insert it into a dynamic representational model. The main justification is that dealing with a zero-dimensional image would already indicate changes in cultural values and, consequently, the ability to change habits and behaviors.

The term “conformed thought” was thus created to designate codes and sets of codes, norms, algorithms, patterns, and interfaces. Hence, the forms generated by algorithms—and, in this context, we will be dealing with artificial intelligence/machine learning—are externalized thoughts, which use mathematical and statistical calculations (propositional logic) for data analysis and the generation of models. These elements, although dematerialized, become sensitive in the resulting image when actualized. This leads us to consider that such conformed thoughts, whether internal (our mind also functions through “conformed thoughts”) or external, act in a determined way and “form”, “inform”, and “conform” thoughts.

From the principle of “conformed thoughts”, we can make important considerations regarding representational aspects. We can observe, for example, an encapsulation of the representational system starting from the “thing itself”, which is perceived and made “object” in order to be communicated and shared (Deely 2004), and that is then synthesized in “modeled object”, with the understanding that models are formed by objects, which in turn are “things themselves” that have been objectified (as we see in Fig. 1).

Fig. 1
The flow of a dynamic model using an apple as a symbol: it begins with a thing, is followed by an object, and finally a model.

Source Author’s personal collection

Basic unit for the dynamic model.

In our view, in line with Flusser (2007), the passage from an “object” to a “model” carries levels of abstraction that are important to note. In other words, the modeling process, which is also “conformed thought”, shows the relationship of the “thing itself” within these representational nests; even if the “thing itself” distances itself from the model, it is somehow preserved as a motivator and trigger of these processes. All this happens in a context of systemic feedback, with processes of evaluation, transformation, comparison with reference values, adaptation, regression, and codification, following the principles of cybernetic systems (Fig. 2).

Fig. 2
Diagram of the basic unit of representation in the cybernetic model, which circulates among goal, transformation, emotions, surroundings, and things, between input and output.

Source Author’s personal collection

Simulation from the cybernetic model for the basic unit of representation.

Moreover, since we are a part of this system and not just observers (Fig. 3), to say that conformed thoughts feed back into the system means that we share these processes in an ongoing relationship, even if we are not fully aware of it. Even though we do not have full control over these processes, they will interfere with our future decision-making in some way.

Fig. 3
Diagram of the basic unit of representation in the cybernetic model, in which each modeled object circulates into an objectified model between input and output.

Source Author’s personal collection

Passage from modeled objects to the objectified model in the simulation.

Following this reasoning, models, which are formed by “objects that have been modeled”, also objectify themselves so that they can be experienced by others, leading to the next level of abstraction, causing effects in these passages, and re-fueling the system again. These changes in levels of abstraction are significant because there are differences between “things that are objectified” and “models that are objectified”. This is the point, albeit a speculative one: a proposition based on the hypothesis that we are training thought forms from “conformed thoughts”, and that this affects us even though it is not a response to direct stimuli from the “things” of the world (Fig. 4).

Fig. 4
The flow of a dynamic model using an apple as a symbol: it begins with a thing, is followed by an object and then a model, and continues. This represents the conformed thought’s escalation of abstraction.

Source Author’s personal collection

Conformed thought’s escalation of abstraction.

From what has been presented so far, we can already anticipate that we will be sharing two structural systems:

  • the cybernetic model applied to a sign system—which would guarantee the intended dynamics between internal and external systemic actions;

  • and Flusser’s representational model of the “escalation of abstraction”—which would guarantee the perception of these passages between levels of abstraction.

“Conformed thoughts”, therefore, are actualized forms of elaborated and deliberate knowledge. This condition already brings “conformed thought” closer to the very definition of sign; however, it is a special sign.

A first distinction is that it is an abstraction of the kind found in Flusser’s technical image. This is an important fact, which already delimits the sign. Second, there is the approximation between “conformed thought” and Charles Sanders Peirce’s concept of symbol. Despite this obvious relation, we point out that every “conformed thought” is in fact a symbol, but not every symbol will be a “conformed thought”. For example, the word “house” represents a [house] symbolically, by convention or law; however, this is a process of conceptualization—of one dimension, in Flusser’s approach—and therefore will not be within our scope.

It is also worth noting that “conformed” thoughts are not restricted to forms, appearances, or expressions of patterns recognized in images. First, because we also consider in this list the concept of model—which in turn is formed by objects, and these are objectified things. Being a model already announces how “conformed thought” carries a conceptual and dematerialized charge. In addition, all thought is formed by distinct kinds of signs (cf. Peirce et al. 1994; Sebeok 2001). The type we are dealing with refers to the habit(s) acquired and formalized by a culture, which depends on the context in which it is embedded and is related to a notably computational technology.

We cannot fail to mention that there are various kinds of signs which, although not recognized as “conformed thought”, also make up all thought (cf. Peirce et al. 1994)—for example, vague formations of sensations, emotions, and feelings. This type is governed by our sensory system (including the sense of experience and observation) and drives our thinking in semiotic evolution, also promoting changes in habits and giving rise to new thoughts interdependent on each other, as demonstrated by Antonio Damasio in his work on the significant role of emotion in reason in the human brain (Damasio 1996). Thus, both the “conformed thoughts” and these other, less structured types of signs merge into an integral thought. More important than recognizing this coexistence between signs of different natures is realizing that creative thinking depends on it! For the human mind acts through different processes, and there are pre-interpretative states in “conformed thoughts”.

To better explain how forms of knowledge conform thought, even before they establish a de facto interpretive action, we return to Flusser’s theory. For him, the technical image is concept (Flusser 2011): it depends on the technological procedure of a period, within a context determined by a society or group, and it determines a way of seeing, from a point of view. We are not even evaluating the represented object (the diegetic object) of this technical image, nor the way it relates to the sign itself that is presented in the image (a condition that occurs even in the face of an abstract form). Before any object in the image is recognized—or an abstract form is contemplated in its plastic and formal qualities—and even before an interpretant is generated from this recognition/contemplation, we can already state that a technical image brings a particular point of view, and this guides the new interpretations generated, in a vague and subjective way. That is, we are not only evaluating the sign’s plastic characteristics (in its qualisign, sinsign, and legisign conditions, in the Peircean view), nor are we merely recognizing a relation between sign and object (immediate and dynamic object, in the Peircean view); the fundamental point is to understand how certain characteristics conform thought beforehand, even before an interpretant is formalized. It is to perceive characteristics of the immediate interpretant through the recognition of the structuring principles of the sign itself—a quasi-interpretation capable of causing signifying effects and provoking changes in habits, even if we do not always realize it. And, later, to understand how these principles feed back into the sensory and cognitive systems that, evolutionarily and by circularity, generate increasingly complex systems of interpretation. We thus begin to design a model that approximates the proposed cybernetic model.

Advancing further, we must still recognize that:

  (I)

    Once conformed thought is actualized, in the very sense of taking shape in the world, it will have elements that reflect its sensory aspects. Since things and signs, as well as experience and thought, are intertwined, conformed thoughts (codes, patterns, interfaces, etc.) are not only concepts, that is, abstractions of a certain degree, but also have sensible elements when instantiated in the world.

  (II)

    Every abstract thought (whether of first, second, or third degree) has the power to generate interference of some kind in the way we perceive the world. Therefore, conformed thinking con-forms, in-forms, and forms.

  (III)

    With the premise that there is a close relationship between signs and things in the world, we can extend this discussion by further recognizing that:

  (IV)

    Our relationship to the world depends on our relationship to our surroundings, an expanded Umwelt (cf. von Uexküll 2007) formed by a complex network of interwoven interpretations of things, objects (objectified things), and models (formed by objects, which are objectified things). In this proposal, models will be considered “conformed thoughts” when they are a zero-dimensional result of abstraction—understanding, therefore, that not every model is a “conformed thought”.

  (V)

    As the Umwelt acts as an interface that selects and filters information from the environment, and we internalize this information in codified form, any material used by living systems in knowledge construction is representational (even if vaguely, or in the condition of quasi-representation); that is, it is formed by a myriad of “some things” that represent “external things”, which are processed into a particular “kind of thing” of our Cognitive System (Peirce et al. 1994; Deely 1990). Moreover, this process feeds back into our sensory system (Albuquerque Vieira 2008).

  (VI)

    Consequently, sensations and “conformed thoughts” depend on each other (Damasio 1996), in the same way that the actions of a body (Innenwelt) and the environment (Umwelt) in which it is inserted are also associated (Sharov 2010, 2012). It is also worth noting that Alexei Sharov introduces the principle of the agent to resolve the boundary between living and artificial systems in his “Functional Information” approach. Thus, living organisms are agents as much as non-living artificial devices are. For the author, agents are broadly defined as “systems with goal-directed programmed behavior” (Sharov 2010, p. 1052). What unites living and artificial agents “is their ability to perform functions for the purpose of reaching certain goals. Functions of agents are encoded and controlled by a set of signs which I call functional information” (ibid.). And when we say that the “actions of a body” (Innenwelt) are associated with its environment (Umwelt), this should be understood as the body of an agent, as proposed by Sharov (2010, p. 1051).

  (VII)

    Finally, one understands life and semiosis (sign action) as coextensive (Sebeok 2001).

In fact, the model in Fig. 3 can itself be considered an agent (or quasi-agent), since it is already a system with goal-directed programmed behavior, and it interacts with other agents of distinct kinds (living or artificial systems), which all act in this process (Fig. 5). Thus, the surroundings, the context, the environment, and the tensions to and from the externality are already being considered (see Fig. 3), and we must also consider that other agents act and interact dynamically with each other.

Fig. 5
A diagram depicts the interactions of different agents. The diagram contains 5 squares labeled agents, each pointing to the center figure labeled goal-purpose-objective.

Source Author’s personal collection

Complex system among agents of different kinds.

After this initial presentation, we can begin to present the elements for the formation of our dynamic, generative, and time-evolving model. Figures 1, 2, and 3 illustrate the proposal in general and have already been presented in other publications (Laurentiz 2017b, 2019).

4 Proposed Model for Representational System Simulation

The main point so far is to realize how objects and models affect us and our emotions, and how they conform thought. And with the emergence of artificial intelligence and machine learning processes, which are conformed thoughts par excellence, we start to notice new triggers.

As we have seen, we go through differentiated processes of “objectified things”, “modeled objects”, and “objectified models”. When we “think and feel” from patterns, using “conformed thoughts”, and react from abilities obtained through repetition, we are affected by these memories and their interfaces. So far, the model behaves similarly to the Flow Free game analysis we did in the “Justification and Previous Research” section. It is important to remember that, at that moment, we asked whether we were aware of how we are shaping our brains, and we took advantage of the systemic feedback between agents and their surroundings to question whether we are the ones teaching the machines or whether they are shaping our way of thinking.

Following this argument, a digital system based on learning and modeling processes will be guided by models and patterns, which in turn will guide the results obtained by a machine that will feed the system back again. In this scenario, artificial intelligence, machine learning, and deep learning suggest a meta-processing of/for the generation of “conformed thoughts” (Fig. 6). Therefore, in our reasoning from Fig. 5, since we would again have new goals and strategies, we could consider it another agent (or quasi-agent) acting on the entire system.

Fig. 6
A diagram of the meta-processing of conformed thoughts: goals and objectives flow down to evaluation, input and action, and output, then to set goals and objectives, and on to goal, purpose, and objective, with disturbances or noises.

Source Author’s personal collection

Model (agent) inserted in meta-processing of/for the generation of “conformed thoughts”.

It is important at this point to pause and reflect on how computers perform cognitive tasks—for example, those that involve recognizing objects in images through computer vision, and how a computer program can name and detect objects in an image. To do this, a system must be based on visual concepts, with object descriptions, attributes, formal specifications, and structured relationships between the different elements detected in the image’s regions (Kiros et al. 2014; Krishna et al. 2017), in addition to the analysis, classification, evaluation, regression, and training procedures that guide the learning process.
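As a concrete illustration, and not the method of the works cited above, a few lines of Python with a pretrained detector from the torchvision library show this naming of image regions; the model choice, input file name, and confidence threshold are our assumptions.

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# excerpt of the COCO label map used by the pretrained model
COCO_LABELS = {1: "person", 17: "cat", 18: "dog", 53: "apple", 55: "orange"}

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")   # trained on COCO
model.eval()

image = Image.open("photo.jpg").convert("RGB")       # hypothetical input file
tensor = transforms.ToTensor()(image)

with torch.no_grad():
    pred = model([tensor])[0]   # dict with "boxes", "labels", "scores"

for label, score, box in zip(pred["labels"], pred["scores"], pred["boxes"]):
    if score > 0.8:             # keep confident detections only
        name = COCO_LABELS.get(int(label), f"class {int(label)}")
        print(name, [round(v) for v in box.tolist()])
```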

It is worth noting that a machine learning algorithm is a mathematical model that maps inputs to specific outputs: one feeds the model pairs of “input + expected output” to train it, and it adjusts its internal parameters from that training. Since the algorithm itself goes through a process of self-adjustment determined by what it has “learned” during its training phase—correcting, adjusting, and fitting results to its purpose—it leads us to think of a metacognitive quasi-process. With training completed, the program can be used with new data inputs and even be easily adapted to new situations. It would be hasty to suggest that the system is consolidating its own memory traces, but we should recognize that there is a significant difference between an algorithm that solves one problem and another that may solve different problems, as in the case of classical algorithms and machine learning algorithms, respectively.
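A toy sketch of this “input + expected output” training: a model with a single internal parameter is repeatedly corrected until the parameter encodes the mapping implied by the pairs. The data and learning rate are illustrative.

```python
# training pairs: (input, expected output); illustrative values near y = 2x
pairs = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

w = 0.0          # the model's single internal parameter (y = w * x)
lr = 0.01        # learning rate: how strongly each error adjusts w
for epoch in range(500):
    for x, target in pairs:
        y = w * x                 # the model's current answer
        error = y - target        # comparison with the expected output
        w -= lr * error * x       # adjustment: one gradient-descent step

print(f"learned w = {w:.2f}")     # close to 2.0, the mapping in the data
```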

It is also important to highlight that machine training procedures involve different strategies. The classification technique, for example, widely used in machine learning, groups things that are similar by parameters that satisfy some selection, norm, or law criteria. Discrimination processes, also commonly used, impose restrictions based on certain conditions and circumstances. Thus, we not only have the synthesis of objects transformed into models, but also mathematical models that analyze, classify, and select models, which are formed by objects, which are things that have been objectified. In fact, there are classical algorithms within the machine learning algorithm—encapsulated algorithms, networks containing several coupled algorithms. This increases the complexity of these logical procedures and, in turn, the levels of representational abstraction. These new procedures should provoke a new tension between “sensations and conformed thought”, in an environment of mixed information and levels of abstraction. In any case, by the very nature of statistical computation, one works with frequencies of values, averages, and systemic trends.
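The grouping-by-similarity idea can be sketched with a k-nearest-neighbors classifier from the scikit-learn library, which assigns a new item to the class of its most similar training examples; the features and labels below are illustrative assumptions.

```python
from sklearn.neighbors import KNeighborsClassifier

# toy features: [diameter in cm, redness from 0 to 1]; labels are ours
X = [[7.0, 0.9], [7.5, 0.8], [8.0, 0.3], [8.5, 0.2]]
y = ["apple", "apple", "orange", "orange"]

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([[7.2, 0.85]]))   # ['apple']: grouped with what it resembles
```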

Returning to Fig. 6, the initial model is now considered encapsulated in the system of Fig. 5. Now, modeling and learning systems (as an agent) will be able to adjust the goals of the previous system. The main point is that these goal adjustments will be made by machine processes (zero-dimensional abstractions), the result of learning and modeling.

Despite this increase in complexity, the results we are getting with current machine learning systems show that we will have to revise and update our current bases and even generate new ontologies from a denser and more complete set of descriptions about an image. Recall the ImageNet case and Microsoft researcher Kate Crawford’s criticism that we are injecting our own limitations into the algorithms. In the article “Excavating AI: The Politics of Images in Machine Learning Training Sets” (Crawford and Paglen 2019), she highlights that making a machine interpret images is much more a social and political issue than a merely technical one. ImageNet demonstrated how these processes can promote discrimination, misjudgments, and biases, and that the technical process of categorization and classification proves to be a political act. Therefore, the production of images made by a machine carries social, political, and economic issues based on the context in which they are inserted and their surroundings, and because of their condition as an “allopoietic system” (Nöth 2001, p. 66).

The entire basis of machine learning systems is built on training sets, and it is these that underlie how the AI will recognize and interpret data from the world. Joy Buolamwini, who identifies herself as the Poet of Code, realized while working with facial analysis software that the software could not detect her face, because the people who coded the algorithm had not taught it to identify a wide range of skin tones and facial structures. After that, she took on a mission to combat bias in machine learning, the result of a “Coded Gaze”. Still according to Crawford, this is not an easy task, since images are loaded with multiple senses and meanings: “Entire subfields of philosophy, art history, and media theory are dedicated to teasing out all the nuances of the unstable relationship between images and meanings” (Crawford and Paglen 2019).

In this sense, it is also important to know the trajectory of the research that culminated in deep learning (Kurenkov 2020), so that we can understand how the generation of models in the computer takes place and gain a better understanding of what a simple image is. In fact, according to Lev Manovich (2018), the challenge is to try to go beyond the search for types, structures, and patterns from already existing and recognized ways of seeing the world (which we understand here as “already conformed thoughts”).

In the presentation of the Flow Free game, we realized that, if machine learning networks learn from us, what we teach them ends up causing changes in our way of thinking, which again shows that these systems are formed by living and non-living agents that are interconnected, each interfering with the other. Therefore, quasi-interpretive processes, sensations, and vague ideas must somehow participate in this system between agents, whether living or non-living.

5 The Contribution of Art

The dynamic model of representation proposed here is an interesting tool for understanding which layers and procedures are being questioned and transgressed by artists. Let us look at the case of Shinseungback Kimyonghun, a Seoul-based artist duo consisting of Shin Seung Back and Kim Yong Hun, authors of the work Cat or Human (2013, at https://ssbkyh.com/works/cat_human/, accessed July 2021). The work is composed of two sets of one hundred photographs that use OpenCV’s human face detection algorithm and the KITTYDAR cat face detector in an inverted form. Flickr photos were used: the program recognized human faces with the cat face detection algorithm, and cat faces were recognized as human faces by a human face detection algorithm. When the artist uses tools and techniques in unconventional ways, for a function for which they were not created, they explore this field of possibilities. Albuquerque Vieira says that the artist explores the fields of possibilities of his/her surroundings (Umwelt) and ends up perceiving sophisticated articulations of reality that follow criteria of organization and coherence, associated with an “aesthetic root” (Albuquerque Vieira 2010).

After the machine is trained to identify one type of object (human faces), it is presented with another type of object (cat faces), and vice versa. The provocation of the work is that the objects are recognized despite the model having been trained on others. Evaluating the proposal through Fig. 7a (an update of Fig. 6), in this case it is as if the machine were trained to recognize apples, were set to recognize oranges, and in some cases identified them as apples (Fig. 7b is an update of Fig. 1). It is evident that the strength of the Cat or Human work is the unusual use of inverted models between animal and human. Furthermore, it signals how these resources are flawed, make mistakes, and are incapable of perceiving subtleties between things/objects/models that a human agent would easily perceive.
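To give a sense of the mechanics being inverted, one can run OpenCV’s stock human-face detector over any photograph, say of a cat, and keep whatever it “recognizes” as a face. The sketch below is our own illustration, not the duo’s code; KITTYDAR, the cat detector they used, is a JavaScript library and is not reproduced here, and the file names are assumptions.

```python
import cv2

# OpenCV's stock detector of frontal *human* faces
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("cat_photo.jpg")                 # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:   # each hit: a cat region read as a human face
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("cat_as_human.jpg", image)
```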

Fig. 7
Two diagrams. Diagram A depicts the conformed thoughts workflow process, and B has an image of an orange labeled thing, 2 apples labeled objectified thing, and modeled object, respectively.

Source Author’s personal collection

a Inserted in Fig. 6 as an artistic strategy. b Inserted in Fig. 1 as an artistic strategy.

Since our present time is insistently confronted with what we call “conformed thoughts”, as we said at the beginning of this work with the example of games, social network applications, interfaces, and screens that we use to communicate in the confinement we are living in, these must somehow be altering and reconfiguring our brain plasticity. Consequently, the actions of a body (Innenwelt) and the environment (Umwelt), which are always associated, participate in this process.

Another artwork by this duo is Cloud Face, a collection of images of clouds that the face detection algorithm recognized as human faces. The proposal follows a principle similar to that of the previous work, but here the machine error is compared by the authors to the human imagination that recognizes figures in the clouds of the sky. Abstract shapes always surprise the human mind by the degree of openness to possible interpretations they allow. Even if it is a mistake, a machine recognizing objects in shapeless masses is also revealing in some way.

Another strategy is, for example, for the artist to train a machine and make it “lose”, on purpose, what it has learned, triggering the very human capacity in which remembering and forgetting are accomplices, not adversaries, in the thought process. This is the case of the video work What I Saw Before Darkness (AI Told Me, 2019) by an artist who simply goes by the name of “the girl who talk to AI”. In this case, the artist intervenes in the training and modeling agent of Fig. 6. She programmed an artificial intelligence to generate a human-looking face, then shut off its neurons one at a time—a process she recorded and shared in a time-lapse video. A normal-looking face slowly takes on strange glitches: lines shift, colors change, and features blur until the face is no longer a face at all, replaced by blobs of brown and white that eventually fade to black.
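What such an intervention might look like in code, as a loose sketch and not the artist’s actual process: units of a small, untrained stand-in generator are zeroed one at a time, and a frame is rendered after each ablation to build the time-lapse.

```python
import torch
import torch.nn as nn

# a tiny, untrained stand-in for a face generator: latent code -> 64x64 image
generator = nn.Sequential(
    nn.Linear(16, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),
)
z = torch.randn(1, 16)           # a fixed latent code: the face's "identity"

hidden = generator[0]            # the layer whose units we will switch off
with torch.no_grad():
    for i in range(hidden.out_features):
        hidden.weight[i].zero_()            # silence unit i for good
        hidden.bias[i] = 0.0
        frame = generator(z).view(64, 64)   # the image degrades step by step
        # save `frame` here (e.g., via torchvision.utils) to build the video
```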

Also impressive is the same artist’s Grandmother of Man and Machine, in which a neural network processes images from the concept of “grandmother”. According to the Web site, “Approaches from neuroscience and computer vision helped to explore which traits of a grandmother the subject network grasped from all the millions of images it had previously seen”. A series of images featured on the site expresses what a grandmother is to the AI. If recognizing figures in shapeless masses (as in Cloud Face) by comparing formal similarities is something unexpected for a machine, a machine recognizing shapes from concepts seems even more unusual.

The classic book What Computers Can’t Do: A Critique of Artificial Reason by Hubert Dreyfus (1972) already pointed out strong reasons for the difficulties of programming intelligent activities on a computer. One issue that he said should be taken into consideration was the discrete nature of all computer calculations. There is also the fact that the human mind has flexibility and is able to perform creative actions to solve problems, and a machine does not (Dreyfus 1972). Already at that time, a proposal was made to think about systems that would promote a symbiosis between computers and human beings, because together they could accomplish things that could not be done separately.

In this sense, we reinforce the idea that humans and machines should work together. The artist Sougwen Chung offers an important contribution here. With her emblematic work On the collaborative space between humans and non-humans, she presents human–robotic performances that generate drawings collaboratively, arising from the relationship of a living system with an artificial one. The performances use one or several robotic arms, which respond to a variety of data inputs that the artist has been developing for some time. Her idea is to think about the ways in which humans connect to mechanical and artificial systems, and vice versa, in what functions as a “creative catalyst”.

Even more relevant is how the artist says she ended up adapting to the “inaccuracies of the machine”, which led her to readjust her own gestures. It is a paradox, since a machine would not have, in its nature, principles of imprecision. It is imprecise compared to the complexity of the drawing performed by the artist’s gesture, which the mathematically precise calculation and the flexibility of the robotic interface cannot achieve. It is important to say that the AI was trained, in a second moment, on the artist’s numerous drawings. At first, only real-time data was captured and processed during the performance; then the system was extended by a learning process drawing on a set of 20 years of data retrieved from the artist, which reflected a certain “trend of her style” as she adapted to the movements of the robot, in continuous circularity. One can clearly see the feedback process between the two, described in the project as the collaborative involvement of the creative action itself.

Finally, Cesar & Lois created a system that feeds back actions from agents of living systems and artificial systems, in a hybrid process capable of triggering improvisations, mishaps, accidents, and new stimuli during performances, including imperfections in the process, as in the case of their artwork Degenerative Cultures. The authors call it “A Post-Anthropocentric Intelligence” and are explicitly “Corrupting the Algorithms Of Modern Societies” (Cesar and Lois 2018). These strategies are important escape valves for the pitfalls of “conformed thoughts”. The aesthetic outcome of all these procedures will then be relearned and re-evaluated, and will reconfigure experiences through circularity and systemic feedback.

6 Final Considerations

From the above, we can consider two main points about the contribution of art to representational systems:

  1.

    In the words of Ivo Ibri, “[…] simply contemplating the world, in a disinterested experience because it has no practical purpose, allows us to demobilize the conceptual forms that mediate our acting in the world” (Ibri 2020, p. 6). This makes the artist and art fundamental pieces of this puzzle of conformed thoughts, representational systems, and creative cognitive procedures. In a dynamic representational system, the artist, with his/her artwork, has the role of fine-tuning the process, expanding our sensitivity, and exposing the fragility of representational models. In other words, the artist is the agent who adjusts the model, launching other perspectives, extrapolating and testing rules and structures, subverting standards, taking the system to its extreme exaggeration, or denouncing its inconsistency. At the same time, he/she expands the model, suggesting deviations, causing noise, and promoting unforeseen degrees of opening. In our view, the artist is a calibrating agent of the system.

  2.

    “Soft representation” is the term that Paulo Laurentiz chose in 1991 to designate an artist’s special attitude toward technology. This kind of representation implies that the productive rules of technology are not imposed on the world (Laurentiz 1991, p. 110). It means, in terms of operative thinking, “not to let there be internal interference of one language on the qualities of the other […]” (Laurentiz 1991, p. 113), of the systems involved. Nothing is more current than thinking about artificial intelligence algorithms in this way. I end with a quote from Paulo Laurentiz:

The commitment of soft representation is not to mask or camouflage the information transmitted by the world, mediated by the organizing rules of signs. It discredits the authoritarian and arbitrary character of the sign and, at the same time, seeks to highlight another side of representation that despises the servile imitation of the sign of the real. (Laurentiz 1991, p. 130)