1 Introduction

Human creativity is often defined as the ability to generate novel and valuable ideas, whether expressed as concepts, theories, literature, music, dance, sculpture, painting, or any other medium of expression (Boden 1998). But creativity does not occur in a vacuum; it is a situated activity embedded within cultural, social, personal, and physical contexts that determine the nature of novelty and value against which creativity is assessed (Lindqvist 2003; Csikszentmihalyi 1999). This chapter explores the relationship between creative AI, embodiment, and performance through the lens of our creative practice.

The term ‘creative AI’ potentially covers a broad spectrum of activities from the use of AI as a medium of expression for human creativity (see Audry 2021) to the modeling of creative behavior in computational systems (see Bown 2021). At one end, creative practitioners including artists, musicians, writers, and designers work with AI as a medium, tool, or socio-cultural phenomenon to facilitate, amplify, or inspire their creative practices, just as generations of creative practitioners have done in light of technological innovations (see Burnham 1968). At the other end of the spectrum, researchers attempt to construct computational models of creativity as a means of understanding the phenomenon and to engineer useful systems; for the purposes of this chapter, we consider this meaning of ‘creative AI’ to be synonymous with ‘computational creativity’ (Veale and Cardoso 2019).

The story of the development of automata from antiquity through to the modern era is littered with imaginings of machines exhibiting creative abilities, from the self-playing instruments described by Aristotle and the Ingenious Devices of Al-Jazari to Jacques de Vaucanson’s Flute Player and The Draftsman of Jaquet-Droz (for a detailed account, see Husbands et al. 2008). This fascination with mechanical performers, coupled with shifting conceptions of creativity from a divine or mystical notion to a subject amenable to rational inquiry, naturally led to the discussion of machines exhibiting creative thought. Ada Augusta, Countess of Lovelace, in her translation of Luigi Menabrea’s Sketch of the Analytical Engine, commented on the possibility of the algorithmic composition of ‘elaborate and scientific pieces of music’ (Lovelace 1843) but made clear that any credit for producing creative works would be due to the engineer, not the machine. A century later, at the dawn of the modern era of computing, Alan Turing reframed the Countess’s comment as Lady Lovelace’s Objection to machine intelligence: that a machine would be incapable of taking its engineer by surprise (Turing 1950). Turing responded by noting that computers often surprised him, owing to a faulty understanding on his part and the complex nature of the processes involved. He also explored the possibility of a machine that could organize itself as the result of its ‘experiences’—a learning machine capable of distancing itself from its engineer.

The proposal for the Dartmouth Summer Research Project on Artificial Intelligence (McCarthy et al. 1955) included the modeling of creative thinking as one of the ‘grand challenges’ facing the nascent field. Attendees went on to develop the first examples of creative AI: discovery systems capable of reproducing the findings of eminent scientists (Langley et al. 1987). Such discovery systems were later criticized for their lack of autonomy because they required significant amounts of a priori knowledge and often relied on human supervision to determine when a creative result had been achieved (see Lenat and Brown 1984). As the field of AI matured, researchers focused their efforts on solving well-defined sub-problems of intelligence, e.g., classification, planning, or theorem proving (Colton et al. 2009). Spurred by developments in the cognitive sciences, however, a renewed enthusiasm for computationally modeling creativity saw the establishment of the field of computational creativity in the 1990s, which again seeks to uncover suitable algorithms and knowledge structures to support creative behavior (Veale et al. 2019). As Guckelsberger et al. (2021) discuss, however, there has been a significant lack of studies concerning questions of embodiment in computational creativity.

Our creative practice sits along the spectrum of creative AI; we attempt to use models from computational creativity to produce robotic artworks. Dealing with the ‘messiness’ of the real world in robotics, however, has frequently raised questions about situatedness, embodiment, and the performance of creativity, i.e., the performative nature of the creative act of generative meaning-making in the human–machine encounter. Consequently, our creative practice has shifted our understanding of computational creativity away from the algorithmic perspective; de-emphasizing the development of generative systems capable of producing novel and valuable concepts and focusing our attention on the development of machine performers that skillfully participate in the enactment of creativity within, and with, a physical and social environment.

2 Background

Traditionally, cognitive science has considered cognition as computations over mental representations (e.g., Fodor 1975). This approach asserts that cognition consists of abstract processes that mediate between internal representations of sensory inputs (perception) and motor outputs (action). In contrast, embodied cognition is a program of research centered around the key assumption that the body functions as an active constituent of cognitive processes rather than a passive perceiver and actor serving the mind (Shapiro 2007; Leitan and Chaffey 2014). Embodied cognition has had a profound influence on the computational sciences, including robotics (e.g., Brooks 1990; Clancey 1997), but has yet to have a significant impact on the development of computational creativity (Guckelsberger et al. 2021). More broadly, Malinin (2016) argues that, until recently, research in human creativity has often overlooked the importance of embodiment. Glăveanu and Kaufman argue that the roles of the immediate physical and social environment have been ‘blind spots’ in creativity research (Glăveanu and Kaufman 2019), and it appears that the computational modeling of creativity has inherited these ‘blind spots’.

Newen et al. (2018) identify different strands of embodied cognition research—embodied, embedded, extended, and enactive cognition—collectively known as ‘4E cognition’. A cognitive process is embodied if it relies on the body in a non-trivial way. Proponents of embodied cognition argue that much of human cognition encompasses both the mind and the body. A cognitive process is embedded if it relies on the physical, social, or cultural environment in non-trivial ways. Proponents of embedded cognition note that people often exploit features of their environment to increase their cognitive abilities. Proponents of extended cognition argue for a stronger form of embeddedness, as when a person becomes so skilled in the use of a tool that it effectively becomes part of their cognitive apparatus. A cognitive process is enactive if it relies on an ability or disposition to act. Proponents of enactive cognition argue that a person’s cognitive abilities are dependent on their interactions with the world.

Creative activity can be understood from this embodied, embedded, extended, and enactive perspective, as evidenced by Csikszentmihalyi’s study of the daily habits of more than 100 exceptional creative people from diverse fields (Csikszentmihalyi 1996, p. 127). Ingold (2013) found that an artisan’s craft is not simply a consequence of physical skill but emerges within a system of relationships and interactions situated in a material environment. Similarly, Glăveanu (2012) studied the creative process of perceiving, exploiting, and generating novel affordances as part of a socially and materially situated activity of traditional Easter egg decoration in rural Romania. These studies highlight the embedded nature of creative activities and foreground the complex dialog with materials that practitioners engage in. Malafouris (2008), for instance, observed that a potter makes ongoing, second-by-second decisions on how to respond to the material agency of the clay. Yokochi and Okada (2005) highlight the enactive processes unfolding through a traditional Chinese ink painter’s hand movements as they explore and perceive where, what, and how to paint next.

Embodied cognition provides an empirically supported framework for understanding existing creative practices grounded in the body (Kirsch 2012), and a foundation for the development of new embodied practices, like our own, as we will explore later. Malinin (2019) identifies two emerging streams of empirical research in embodied creativity:

  1. studies of embodied metaphors associated with creativity, e.g., studies of free walking as enacting thinking outside of the box (Leung et al. 2012), and

  2. studies of creativity as an emergent phenomenon of dynamic systems, e.g., music as an emergent, dynamic interaction among musicians and instruments (Schiavio and van der Schyff 2018).

In general, however, Malinin (2019) argues that the traditional cognitivist separation of mind and body manifests in creativity research as disjoint views of creative cognition and creative action. This disconnect between the realms of ideas and actions is frequently mirrored in the computational modeling of creativity (Guckelsberger et al. 2021).

3 The Embodiment of Creative AI

Creative practitioners have always understood and experimented with the meaning-making potential of embodiment and movement in computational systems, as demonstrated by the history of cybernetic and robotic art (see Kac 1997; Penny 2012). For example, Gordon Pask’s The Colloquy of Mobiles (1969) performed a dynamically evolving mating ‘dance’ between five ‘mobiles’—three female soft fiberglass shapes and two male aluminum rectangles (Pickering 2010). Open to interference from visitors using mirrors and flashlights, the mobiles engaged in a dynamic performance as an endlessly emergent cycle of relations, meanings, and desires; a conversation across non-humans and humans (Fernandez 2008). Edward Ihnatowicz’s cybernetic works anticipated the embodied approach of behavior-based robotics (Brooks 1990) by almost two decades. SAM (1969) and The Senster (1971) implemented a small set of simple behaviors that, when placed into the environment of a busy gallery, combined to produce complex social interactions (Zivanovic 2005).

3.1 Curious Whispers

Curious Whispers (2010–2011) was our first attempt to study embodiment in computational creativity (Saunders et al. 2013). Curious Whispers consists of three mobile robots—each equipped with a speaker, a microphone, and a movable plastic cover—and a three-button synthesizer (see Fig. 1). In operation, each robot performs simple melodies using three tones and listens for melodies performed nearby with the same three tones. When rehearsing melody variations, a robot makes use of its embodiment by closing its plastic cover, allowing it to use the same hardware and software to evaluate melodies performed by itself and by others.

Fig. 1
Curious Whispers (2010–2011)

The robot hardware consists of a 3pi robot base, two microphones, a speaker, and a servo; the robot’s software runs on the 3pi’s AVR ATmega328 microcontroller. When a melody is heard, it is assessed note-by-note against a small set of recently heard melodies, with each note stored as a frequency and a duration, to determine its novelty. If a melody is determined to be novel, it is added to the set, replacing the oldest melody if the set has reached its maximum size. Exposure to novel melodies switches the mode of the robot from listening to rehearsing if the melody came from another agent, and from rehearsing to performing if the melody was played by the robot itself. In rehearsal mode, a simple genetic algorithm generates new melodies based on the set of previously novel melodies; these are played directly through the speaker to ensure that the internal representation reflects what is heard by the microphones.
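As an illustration, the sketch below (in Python, for readability; the installed firmware runs in C on the microcontroller) shows one way the note-by-note novelty test and the mode switching could be organized. The distance measure, memory size, and threshold are our assumptions for this sketch, not the installation’s actual values.

```python
# Illustrative sketch only: names, distance measure, and thresholds are assumptions.

MAX_MEMORY = 8           # assumed size of the set of recently heard melodies
NOVELTY_THRESHOLD = 0.3  # assumed adoption threshold

def note_distance(a, b):
    """Compare two notes, each stored as (frequency_hz, duration_ms)."""
    freq_diff = abs(a[0] - b[0]) / max(a[0], b[0])
    dur_diff = abs(a[1] - b[1]) / max(a[1], b[1])
    return 0.5 * (freq_diff + dur_diff)

def melody_novelty(melody, memory):
    """Novelty = distance to the most similar remembered melody (note-by-note)."""
    if not memory:
        return 1.0
    def distance(other):
        pairs = list(zip(melody, other))
        return sum(note_distance(a, b) for a, b in pairs) / len(pairs)
    return min(distance(m) for m in memory)

def hear_melody(melody, memory, played_by_self):
    """Assess a heard melody; adopt it and switch mode if it is novel enough."""
    if melody_novelty(melody, memory) > NOVELTY_THRESHOLD:
        memory.append(melody)
        if len(memory) > MAX_MEMORY:
            memory.pop(0)            # replace the oldest melody
        return "performing" if played_by_self else "rehearsing"
    return "listening"
```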

Curious Whispers attempts to embed the robots within a wider social environment by providing a ‘level playing field’ between human and artificial agents using a three-button synthesizer and a simple interaction policy: if a robot considers a melody played on the synthesizer to be novel, it adopts it. Closing its cover signals to human participants that the robot has adopted a melody and switched from listening to rehearsing. Through this simple interaction, human interactors introduce situated domain knowledge.

3.2 Zwischenräume and Accomplice

Our robotic installations Zwischenräume (2010–2012) and Accomplice (2013–2014) explore the affective-material potential of robotic systems that evolve based on the emergent cycles of relations that their material embeddedness gives rise to. Zwischenräume (2010–2012) was our first robotic installation; it embeds a pair of gantry robots into the architectural fabric of a gallery, sandwiched between a gallery wall and a temporary wall that resembles it. Each robot is equipped with a camera, a motorized tool, and a microphone. The control system for each robot combines machine vision and a model of intrinsic motivation (Gemeinboeck and Saunders 2013). Movements, shapes, sounds, and colors are processed using machine learning to allow each robot to develop expectations of its environment and the consequences of its actions. Multiple self-organizing maps (Kohonen 1995) are used to determine the similarity between images taken by the camera based on shape, color, and optical flow. Dissimilar images provide a reward signal based on their degree of novelty, using a non-linear ‘hedonic’ function based on the Wundt Curve (Berlyne 1960), which maximizes reward for inputs with moderate levels of novelty, i.e., inputs that are similar-but-different to previous inputs. Reinforcement learning (Watkins 1989) is used to develop strategies for moving about the wall and using the tool. Prediction errors between learned models of consequences and observed results are used as a measure of surprise.
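A common way to realize such a hedonic function is as the difference of two sigmoids: one rewarding novelty, and a second, shifted toward higher novelty, punishing it. The following minimal sketch assumes this form; the centres and slopes are illustrative values, not those used in the installation.

```python
import math

def sigmoid(x, centre, slope):
    return 1.0 / (1.0 + math.exp(-slope * (x - centre)))

def hedonic_reward(novelty, reward_centre=0.4, punish_centre=0.7, slope=20.0):
    """Wundt-curve-like reward: highest for moderate ('similar-but-different') novelty."""
    return sigmoid(novelty, reward_centre, slope) - sigmoid(novelty, punish_centre, slope)

# Familiar (0.1) and extremely novel (0.9) inputs earn little reward,
# while moderately novel inputs (~0.55) are the most 'interesting'.
for n in (0.1, 0.55, 0.9):
    print(n, round(hedonic_reward(n), 2))
```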

Embodiment provides opportunities to expand the robots’ behavioral range by taking advantage of properties of the physical environment, i.e., the wall. The robots are not equipped with an explicit representation of the wall’s material; instead, they learn to recognize salient features in the environment to support their learning. In this way, the computational model implemented in each robot is embedded in its immediate physical environment. The robots’ capacity to act evolves over the course of the exhibition based on their interactions with their surroundings, resulting in an enactive coupling of the sensorimotor loop, the physical environment, and the learning that drives behavior. The robots coordinate their movement, gaze, and tool activation to produce novel features in their environment to generate learning rewards. The result of their intrinsically motivated learning is a feedback process, which increases the complexity of the robots’ environment relative to their perceptual abilities. Over time, sequences of movements and knocking actions develop into a behavioral repertoire of skills grounded in the robots’ embodiment that can produce perceived changes in terms of color, shape, and motion.

We explored different embodiments of Zwischenräume over multiple iterations between 2010 and 2012, most notably by equipping the robots with different motorized tools, including a hammer, a chisel, and a punch. Using the same controller for each iteration, we could observe how the robots’ behaviors were contingent on their embodiment as they learned to effect change in different ways. The coupling of the robots with their physical environment provided a simple model of enactive cognition; the robots learned to re-sculpt their physical environment through their actions and the perception of their consequences. Figure 2 is a collage of images taken by a single robot during an installation. Each image was taken when the robot discovered something ‘interesting’, i.e., when the hedonic function returned a reward higher than a predetermined threshold. Figure 2 illustrates how the evaluation of ‘interesting’ evolved over time and was affected by (a) the positioning of the robot and camera, e.g., the discovery of lettering on the plasterboard wall; (b) the use of the tool, e.g., the production of dents and holes; and finally (c) the interaction of visitors. Visitors to the gallery were frequently a source of novelty for the robots, and the robots attended to them, as illustrated in the final row of Fig. 2, until the colors, shapes, and movement of visitors were learned and so ceased to be sufficiently novel to be interesting.

Fig. 2
Evolution of a robot’s view in Zwischenräume

Accomplice (2013–2014) builds on Zwischenräume; like Zwischenräume, it encases robots within the walls of a gallery. Unlike Zwischenräume, the robots in Accomplice can move such that they have overlapping areas of operation. Sharing an immediate physical environment with other robots permits the indirect coordination of actions. Areas of common action become frequently visited by the robots as repeated sources of learning rewards, modeling a form of physically embedded, social coordination through action. The robots also communicate directly through the knocking patterns of their motorized tools. Each robot evaluates the knocking-pattern rhythms of other robots against its own and selects novel rhythms for its own actions. The collective result of this sharing of rhythms is the use of similar knocking patterns across the gallery that change over time as the rhythms are performed, selected, and varied by each robot. The robots’ embeddedness in a social space is materialized as a dynamic soundscape in the gallery.
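A minimal sketch of this rhythm sharing, under the assumption that knocking patterns can be represented as short binary beat sequences, might look as follows; the novelty measure and the variation operator here are our illustrative choices, not the installed system’s.

```python
import random

def rhythm_novelty(rhythm, own_rhythms):
    """Novelty = normalized Hamming distance to the most similar of the robot's own rhythms."""
    if not own_rhythms:
        return 1.0
    return min(sum(a != b for a, b in zip(rhythm, other)) / len(rhythm)
               for other in own_rhythms)

def maybe_adopt(heard, own_rhythms, threshold=0.25):
    """Adopt a heard rhythm, with a small variation, if it is sufficiently novel."""
    if rhythm_novelty(heard, own_rhythms) > threshold:
        varied = list(heard)
        beat = random.randrange(len(varied))
        varied[beat] = 1 - varied[beat]      # vary one beat before re-performing
        own_rhythms.append(varied)
        return varied
    return None

# Example: a robot hears [1, 0, 0, 1, 0, 1, 1, 0] knocked by a neighbor and,
# if it is novel relative to its own repertoire, adopts a varied version of it.
```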

In both Zwischenräume and Accomplice, the performance of the robots is shaped by their curious disposition, the drive to seek novelty, continuously expanding their behavioral envelope. Performance here is not about re-performing an existing script but rather emerges from the system’s ongoing evolution, situated in and in-interaction-with its environment. Learning and adapting are thus not goal-driven but are based on what the robots discover and interpret as ‘interesting’. The seemingly passive wall and its material capacities, resisting or accelerating the machines’ eager work, play an important role in the unpredictable evolution of this performance and the emergence of agency across computational, mechanical, and physical systems. The installation thus acts out a particular ecological niche—a dynamic co-mingling of processes, matter, and things, while foregrounding the affective potential of nonhuman, socially behaving, creative agents. In the next section, we take a closer look at this entanglement of embodiment and performance in our current practice.

4 The Performance of Creative AI

While Zwischenräume and Accomplice embodied and embedded machines that demonstrated an enactive cycle of learning through doing, our collaborative project, Machine Movement Lab (MML), makes explicit our understanding of creativity as a distributed process, scaffolded by the performer’s skills. The performance of creative AI here refers to both (1) the performativity of the creative act as distributed across the robot, other (e.g., human) agents, and the situation they are embedded in, and (2) the performance of the robot as a ‘skillful participant’, which scaffolds the performativity of the creative act.

Bringing together creative robotics, dance performance, and machine learning, MML is grounded in a performative framework to explore the enacted nature of creative agency and meaning-making. Questions of agency have been identified as central to the recognition of creativity in computational systems (Guckelsberger et al. 2017), and agency is commonly understood as an attribute built into the system. Our experience of developing embodied systems, however, has shifted our focus from endowing robots with creative agency to developing skillful participants in the distributed enactment of agency. Rather than being invested with agency, a machine as a ‘skillful participant’ engages in and facilitates the emergence of creative agency between machines, other agents, and their environment. From a performative viewpoint, agency is not a property that can be possessed but rather ‘is a matter of intra-acting … an enactment’ (Barad 2007, p. 178). MML builds on Barad’s concept of intra-action to develop a performative approach to human–robot interaction through material, performance-based inquiries into the situated enactment of human–robot relations.

In contrast to many approaches to human–robot interaction that focus on relation-making with humanlike robotic agents, MML looks for differentiated starting points for the making of human–robot relationships by investigating the relational, performative potential of abstract, machinelike artifacts (Gemeinboeck 2021). The performativity of the creative act, as part of the meaning-making in the interactive exchange, has thus driven every aspect of our design process, including the robot’s mechanical design in tandem with developing its behavioral language, learning, and improvisational capacities. Looking at design not as a method for developing an autonomous, creative agent but rather as a socio-culturally situated, material process for scaffolding a robot’s social and creative skills deeply integrates our understanding of embodied cognition with our design approach. Much of this integration has been driven by our collaboration with dancers and their bodily ways of knowing, which has allowed us to explore material, social, and cultural interrelations, and how they can mobilize enactments of creative meaning-making.

Our approach revolves around a novel performative body mapping (PBM) methodology, which involves dance performers wearing a robot costume to corporeally entangle with and ‘feel into’ a machine’s different embodiment with its unique spatial-relational affordances and affective potential. Theater and performance have a history of using costumes to interfere with performers’ bodies and their performance (see Suschke 2003, p. 205). Combining ideas from theatrical costume and demonstration learning, the PBM costume facilitates dancers’ ability to bodily probe and kinesthetically extend into the robotic embodiment. The costume provides a material interface between human and robot bodies and their differing movement capacities, such that a performer can apply their embodied knowledge and their socio-culturally embedded understanding of movement. It allows the dancer to learn how to embody the machinic form and move with this unfamiliar embodiment (see Fig. 3) and for the robot to learn from the dancer-in-costume by imitating the recorded movements of the PBM costume.

Fig. 3
Audrey Rochette in the PBM costume. Image copyright of the authors

MML explores how the relational, enactive potential of movement scaffolds a robot’s ability to participate in the dynamic, creative processes of the social encounter. Movement in robotics is commonly a matter of safely navigating space, whereas human–robot interaction design also employs movement and its qualities to imbue robots with an expressive character or personality. Movement here serves as a medium for ‘accurately expressing the robot’s purpose, intent, state, mood, personality, attention, responsiveness, intelligence, and capabilities’ (Hoffman and Ju 2012, p. 91). MML, in contrast, understands movement as a dynamic, relational phenomenon, unfolding through ‘spatial, temporal, and energic qualities’ (Sheets-Johnstone 2012, p. 49), whose generative potential can drive the relation-making dynamics of an encounter. Understood both performatively and from the perspective of phenomenology and embodied cognition, movement is how we bodily participate in the generation of meaning, ‘often engaging in transformational and not merely informational interactions; [we] enact a world’ (Di Paolo et al. 2010, p. 39). As we enact and experience meaning through movement, we also make sense of other bodies by resonating with them and their movements (see Fuchs 2016). In Fuchs and Koch’s words, ‘one is moved by movement […] and moved to move’ (Fuchs and Koch 2014, p. 1). Bodily feeling into the asymmetric relational potential of a robot’s different embodiment enables the dancer to bodily resonate with it. This ‘intra-bodily resonance’ (Froese and Fuchs 2012, p. 212) then gives rise to a hybrid movement language, resulting from the dancer moving with or as part of the cube form without relying on expressions of inner states. Our performers frequently use mental imagery of nonhuman dynamics (e.g., that of a pressure cooker, melting, or heavy rain) to guide their search for new movement patterns and the body reconfigurations they require, together with attending to the costume’s material affordances.

PBM harnesses the embodied expertise of dancers to inform every aspect of our process from the initial form-finding stage to the robot’s movements and behavior. Instead of starting with a pre-defined form, PBM begins with an exploration of the agential potential of movement by collaborating with dancers to bodily investigate the performative potential of a wide range of materials and shapes. As dancers inhabit a variety of geometric forms made from different materials, form-finding unfolds along creative alliances between movement and materials and their emergent meanings, rather than a set of pre-defined social functions that reduce a robot’s embodiment to a physical container (see Ziemke 2016). Views of machinelike robots lacking ‘emotional displays’ because they cannot ‘express human facial expressions’ (Hegel et al. 2009, p. 173) overlook the affective, agential capacities of situated movement. Entangling a dance performer with the unique spatial-material affordances of a becoming-robot allows for its dynamic, relational capacities to arise from a hybrid (human-nonhuman), interior perspective (Gemeinboeck and Saunders 2021).

The first robot prototype we realized using PBM has the shape of a cube. With its regular, omnidirectional geometry, a cube cannot be mistaken for a living ‘thing’ but instead offers a suitably blank canvas for dynamic relation-making. A dynamically or delicately moving cube, suddenly tilting up along one of its edges, gently swaying or rambunctiously thumping onto the ground, quickly loses its rootedness and transforms into something other than a familiar object. The mechanical design of the robot, referred to as the cube performer, was derived from an analysis of the recorded motion patterns and their relational effects (Saunders and Gemeinboeck 2018). The PBM costume allows us to capture kinetic dynamics of a wide range of amplitudes; our goal for the machine learning as part of PBM is to utilize these dynamics to render the cube performer a highly skilled participant in the relational exchanges unfolding in a human–robot encounter, without inscribing them directly onto the robot. Our approach builds on demonstration learning, also known as robot programming by demonstration (Billard et al. 2008), in which a human demonstrator’s movements are recorded using motion capture and a robot learns to imitate them from the captured data. A significant challenge when using this method is that it requires mapping between different embodiments, including different body shapes, sensorimotor capabilities, and movement repertoires (Dautenhahn et al. 2003), sometimes referred to as the ‘correspondence problem’ (Billard et al. 2008).

Our objective is for the robot to move according to its own abstract machine embodiment, while being ‘seeded’ with the movement qualities, textures, and nuances that support social sense-making. With respect to machine learning, our challenge was thus to provide the necessary scaffolding for intra-bodily meaning-making, informed by both the recordings from the PBM costume and the robot’s own machinic embodiment. During the learning process, the robot engages in an embodied form of social learning, similar to what Kirsch describes as a ‘sketch in dance’ (Kirsch 2012). The term ‘sketch’ is used to highlight that imitated movements will inevitably be variations, due to differences in skill and the specifics of the embodiment. Recognizing and tapping into the differences of the machine’s embodiment are at the core of the project. Hence, rather than looking at the robot’s body as a mobile container, the machine learning approach has been developed in tandem with the robot’s embodiment and capacity to move. The following outlines the three machine learning phases: grounding, imitation, and improvisation.

In the grounding phase, the robot learns an initial movement repertoire, informed only by its unique physical embodiment in response to sensed environmental affordances. We deploy an ‘illumination algorithm’ (Mouret and Clune 2015), which allows a robot (in a simulated environment) to ‘discover’ its own body and possible kinesthetic relations in response to environmental affordances. The machine learner develops this repertoire by ‘illuminating’ a space of behaviors to find multiple possible ways of moving, rather than searching for a single optimal solution. We use a variation of the MAP-Elites (Mouret and Clune 2015) illumination algorithm, which combines the evolution of robot controllers with a dimensional reduction algorithm, e.g., an autoencoder, to define the behavior space being illuminated (Cully 2019). Through this active self-exploration, the robot begins to generate a movement repertoire unique to its physical form.
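To make the grounding phase concrete, the sketch below implements the basic MAP-Elites loop (Mouret and Clune 2015) with a toy stand-in for the physics simulation and a hand-crafted two-dimensional behavior descriptor in place of the autoencoder-defined space; it is a simplified sketch of the idea, not our actual implementation.

```python
import random

GRID = 10        # cells per behavior dimension
N_PARAMS = 8     # assumed number of controller parameters
archive = {}     # behavior cell -> (fitness, controller)

def simulate(controller):
    """Toy stand-in for simulation: returns (behavior descriptor, fitness)."""
    travel = min(max(sum(controller) / N_PARAMS, 0.0), 0.999)       # e.g., how far the cube moves
    effort = min(sum(abs(p) for p in controller) / N_PARAMS, 0.999)
    return (travel, effort), -effort                                 # e.g., prefer economical motion

def to_cell(descriptor):
    return tuple(int(d * GRID) for d in descriptor)

def mutate(controller, sigma=0.1):
    return [p + random.gauss(0, sigma) for p in controller]

for _ in range(10000):
    if archive and random.random() < 0.9:
        parent = random.choice(list(archive.values()))[1]   # vary an existing elite
        child = mutate(parent)
    else:
        child = [random.uniform(0, 1) for _ in range(N_PARAMS)]
    descriptor, fitness = simulate(child)
    cell = to_cell(descriptor)
    if cell not in archive or fitness > archive[cell][0]:
        archive[cell] = (fitness, child)                    # 'illuminate' this niche

# The archive now holds one elite controller per behavior niche:
# a repertoire of ways of moving rather than a single optimal solution.
```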

In the imitation phase, we bring together the repertoire of movements generated in the grounding phase with the repertoire captured from the dancers inhabiting the costume (PBM costume). Drawing from these two data sets, the grounded and the captured, the challenge for our machine learner is to create a new movement repertoire across these two differing data spaces. To facilitate this, we adapt the low-dimensional (latent) space of the autoencoder, which initially captures only the grounded movements, by training it on sequences drawn from the captured movements. The result of this additional training is a latent space where both data spaces are superposed and ‘mingle’ according to their similarities and differences, establishing niches and gaps. As the simulated robot learns to imitate the PBM costume’s movements, the goal is for the robot to learn the constraints that produce the movement qualities and subtleties, which emerged from entangling dancer and robot costume.
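The sketch below illustrates the underlying idea with a small autoencoder: train it first on the grounded movement sequences, then continue training on both data sets, so that the two data spaces come to share one latent space. The architecture, dimensions, and placeholder data (shown with PyTorch) are illustrative assumptions, not our production model.

```python
import torch
from torch import nn

SEQ_DIM = 60       # assumed: a flattened short movement sequence (e.g., 20 frames x 3 DoF)
LATENT_DIM = 8

class MovementAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(SEQ_DIM, 32), nn.ReLU(),
                                     nn.Linear(32, LATENT_DIM))
        self.decoder = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(),
                                     nn.Linear(32, SEQ_DIM))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, data, epochs=200, lr=1e-3):
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = loss_fn(model(data), data)
        loss.backward()
        optimiser.step()

grounded = torch.rand(500, SEQ_DIM)   # placeholder for grounding-phase sequences
captured = torch.rand(300, SEQ_DIM)   # placeholder for PBM costume recordings

model = MovementAutoencoder()
train(model, grounded)                           # latent space shaped by the robot's own movements
train(model, torch.cat([grounded, captured]))    # further training lets both data spaces 'mingle'
```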

Finally, in the improvisation phase, the robot learns to adapt previously learned patterns of movement to invent new movements. The hybrid, third data space becomes the robot’s learning ground for this phase, where the ‘illumination algorithm’ creates new movement repertoires by learning to fill in gaps. Our PBM methodology, including these three learning phases, thus allows us to reimagine robotic agents as skillful performers, capable of moving in uniquely machinelike ways while participating in creative human–robot meaning-making. Results from user studies involving audiences and experts attest to the efficacy of this generative approach in producing robotic movement skills that are grounded in the physical embodiment of the robot while being embedded in and informed by the social and cultural context of our interdisciplinary collaboration (Gemeinboeck and Saunders 2019).
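Continuing the sketch above (same assumed model and placeholder data), the gap-filling intuition behind the improvisation phase could be illustrated as follows; this simple nearest-neighbor gap search is only a stand-in for the illumination algorithm used in practice.

```python
# Decode latent points that lie in gaps between the known (grounded and captured) movements.
with torch.no_grad():
    known = model.encoder(torch.cat([grounded, captured]))    # latent positions of all known movements
    candidates = torch.randn(1000, LATENT_DIM)                # random latent points
    nearest = torch.cdist(candidates, known).min(dim=1).values
    gaps = candidates[nearest.topk(10).indices]               # the 10 most isolated candidates
    invented = model.decoder(gaps)                            # candidate 'new' movement sequences
```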

5 Conclusion

The term ‘creative AI’ potentially covers a broad spectrum of activities from the use of AI as a medium of expression for human creativity to the modeling of creativity in computational systems. The computational modeling of creativity has seen remarkable advances but, like studies of human creativity, has typically focused on the generation of novel ideas, rather than the role of embodiment in creative activity. Our practice sits along the spectrum of creative AI; we attempt to use models from computational creativity to produce robotic artworks. Developing robots requires us to deal with the ‘messiness’ of the real world, highlighting for us the situatedness and embodiment of creativity, and the performative nature of the creative act. Consequently, our creative practice has shifted our understanding of creativity away from an algorithmic perspective and toward the development of skillful machine performers in the enactment of creativity. This conception of creativity as a form of intra-action has guided our current program of arts-led research. Designing from this performative viewpoint, where agency and meaning are no longer pre-defined, requires us to position ourselves in the middle of the encounter as part of the design process and attend to the ongoing dynamics, agencies, and meanings as they emerge.