1 Introduction

Money and finance have not been left unscathed by digital technologies. In many ways, the digital ecology of rapid, large-scale transactions suggests that the nature of money as we understand it may be changing. Electronic payment methods and the tendency to abandon cash are often cited as the most significant signposts of the digital economy (Rolington, 2016), followed by discussions of cryptocurrencies as a form of money for the future (Frisby, 2014). The social consequences of a fully digitalized financial world are also frequently raised in debates about the future of finance and money (Lovink et al., 2015).

That being said, it is one thing to claim that new ways of working with money and finance arise in the digital universe, and another to claim that algorithms, networks, and hashes perform new ways of engendering money and organizing finance. The difference between these claims becomes particularly important if, as Martin (2011) states, money is a “social technology,” i.e., an arrangement by which a given group, society, or nation organizes the way in which its values are represented, affirmed, and distributed. If the first claim is a matter of technical innovation, the second is a matter of technicity, i.e., it concerns the way in which the technicity of digital networks itself determines the generation and recognition of monetary and financial value, as well as the meaning these values carry within the collective realm.

By claiming that digital technicity reorganizes the technicity of money itself and, with it, of finance, one raises the question of how this technicity is put in place. It is consequently a question of knowledge (who has the ability to put the technical apparatus in place?) and politics (who controls the process? who decides what the parameters and rules are?). This is also true of any other domain that is reconfigured when introduced into digital networks—from mass communication to business—a reconfiguration that is commonly expressed as “disruption.”Footnote 1

The question at hand concerns the place money and finance occupy in this reconfiguration. Does money become more “democratic” when it is generated by proof-of-work and verified by a peer-to-peer network, leaving banks and governments aside? Do trading algorithms make markets more stable by improving their liquidity? Are the proposals for new forms of money (complementary currencies or cryptocurrencies) radically different from what we have so far understood as money? If so, do they alter the power relations implied by the technicity of money? As for finance, are new forms of trading arising that bypass the structures of control put in place by Wall Street?

Since the implications of broaching money and finance through the question of digital technicity are manifold, I have chosen to examine two of its most advanced phenomena in this article. One is bitcoin, particularly the technology behind it, the blockchain (also used in other digital initiatives, mostly involving the notion of “smart contracts”).Footnote 2 The other is high-frequency trading, which has rapidly gained importance over the last decade on Wall Street and in other financial hubs. These phenomena lie at the two “extremities,” one could say, of the subject: the first seeks to create an autonomous, inviolable form of money, while the second aims at perfecting financial trading by exponentially raising its liquidity. Understanding the changes introduced by their operations in more detail may inform the examination of the other initiatives that populate the world of digital money and finance.

High-frequency trading (HFT) designates a form of algorithmic trading; algorithms are widely used in financial markets to automate certain operations according to preset parameters determined by a given trading company or trader. HFT in particular is characterized by extremely fast computing; the reduction of latency (the interval between a command and its realization, i.e., response time); the ability to place gargantuan amounts of orders within infinitesimal intervals—either as a way to increase trading and liquidity itself, or simply to discover the current state of the market—and high message rates, i.e., volumes of orders, quotes, and cancelations (Shorter & Miller, 2014, p. 6). HFT algorithms were involved in events such as the 2010 “flash crash” (when the market suddenly crashed, only to recover within the hour) and the practice of “spoofing,” which consists of manipulating the market by placing and canceling orders in extremely short periods of time.Footnote 3 Over the last 6 years, the practices of HFT traders have been investigated repeatedly by the Securities and Exchange Commission (SEC), which is still looking for ways to regulate them.

The Bitcoin protocol, presented in 2008 in a paper signed Satoshi Nakamoto (probably a pseudonym), was announced as a solution to the double-spending problem that had haunted digital currencies up until then: how is it possible to make sure that a particular unit of digital information (in this case, a “virtual coin”) is unique, i.e., cannot be replicated and spent multiple times? The blockchain is intended to solve this problem with a very sophisticated system of peer-to-peer (P2P) verification.
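The core idea behind the blockchain’s answer to double-spending, a chain of blocks in which each block commits to the hash of its predecessor, can be sketched in a few lines of code. The following is a minimal illustration, not the actual Bitcoin implementation: real blocks involve timestamps, proof-of-work, and Merkle trees, and all names and transactions below are invented.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents (a simplification of Bitcoin's block header hash)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    return {"transactions": transactions, "prev_hash": prev_hash}

# Build a tiny chain: each block commits to the hash of its predecessor.
genesis = make_block(["coinbase: 50 -> alice"], prev_hash="0" * 64)
block1 = make_block(["alice -> bob: 10"], prev_hash=block_hash(genesis))
block2 = make_block(["bob -> carol: 5"], prev_hash=block_hash(block1))

# Rewriting history (e.g., redirecting alice's coins) changes the hash of the
# altered block, which no longer matches the prev_hash recorded downstream.
genesis["transactions"] = ["coinbase: 50 -> mallory"]
print(block1["prev_hash"] == block_hash(genesis))  # False: tampering is detectable
```

In the actual protocol, the P2P network of verifiers accepts only the longest valid chain, so an altered block would have to be recomputed along with every block after it, faster than the rest of the network; the sketch shows only why the alteration is visible at all.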

The theoretical framework employed in this analysis is that of Vilém Flusser’s writings of the 1980s. In these texts, the Czech-Brazilian philosopher sought to understand the media forms then emerging. Beginning with photography, and later moving to early digital systems, he developed the notion of the technical image, which offers deep and incisive insights into how contemporary communication technologies transcend the mere ability to represent the world and come to act in it.

It is no coincidence that the period in which Flusser and other media theorists identified an “image-flood” (Bilderflut) (Flusser, 2005) witnessed the upswing of globalized, financial capitalism. Flusser was prompted to write his trilogy on media in the late 1970s and early 1980s, during the early days of what would become an accelerated expansion of both cable television and debentures. Within a few decades, global communication and the global economy would become significantly different: more immediate, more fleeting, generating ever-greater figures. Why is it not a coincidence? Because in both cases the technological framework, together with institutional reforms, played an important role. Information technologies such as cables, satellites, computers, optical fiber, etc., allowed for an unprecedented development in global financial innovations and in the transmission of news, publicity, film, and all other kinds of images.

What one learns from Flusser’s theories on media is that there is more to the “image-flood” than a mere acceleration: there is also a relevant shift in how the images come to be and the roles they fulfill (Theis, 2011). Digital technologies, with respect to images and particularly to finance and money, produce a “double layer” of perception. On one layer, the most complex and detailed part of the representation is accessible only to the apparatus at its inception, while on the other, the human viewer receives a surface that, to a great extent, does not differ from what she is already used to experiencing with traditional images or texts. Flusser’s argument, as we shall see, resonates with the question of technicity raised above, concerning digital finance and digitalized money. Moreover, this article argues that one crucial aspect of money, and by extension of financial operations, is that it contains a quality of image, which is equally visible in the so-called commodity theory of money and its rival, the credit theory. This is the reason why Flusser’s concepts help illuminate the transformation of the technicity of money taking place in the digital world.

The first section deals with Vilém Flusser’s concept of the technical image, particularly its most relevant features regarding the problem of digital finance. The discussion concerns the technical image’s link to text, code, and “traditional” images; its relation to the social role of symbol manipulation; and the modes of consciousness (magical, historical, and post-historical) and experiences of time (circular, linear, and non-dimensional) existing in any given society. These are the central features of technical images that will be analyzed in relation to the role of apparatuses, as well as the utopia and dystopia Flusser sketches for post-history. The second section presents an overview of theories about the nature and operation of money and finance. Particular importance is attached to the links between different ways of understanding money and the ways money is used, especially with respect to the linearity and circularity of temporality. The third section is dedicated to high-frequency trading and to how the algorithmic operations on which it is based can be interpreted in light of the first two sections. This section also discusses how one could determine whether this digital finance phenomenon points towards the utopia or the dystopia suggested in Flusser’s texts. The fourth section concerns bitcoin and the technology on which it is based, namely the blockchain. It analyzes the emergence of virtual coins and their process of validation in light of Flusser’s theory of the technical image, as well as the social and political implications of cryptocurrencies, using Flusser’s scenarios of utopia and dystopia as a backdrop. The conclusion links the problems raised in the last two sections to the general question of money and finance as discussed in the second section.

2 Technical Images

While studying the trends of twentieth-century communication and culture, Vilém Flusser coined the notion of the technical image, in books such as Towards a Philosophy of Photography (1983), Into the Universe of Technical Images (1985), and Does Writing Have a Future? (1987). The philosopher’s efforts are aimed at considering the possibilities for freedom, and the lack thereof, in the kind of social world (cf. Zielinski, 2006) that was emerging through the Bilderflut. His argument sheds light on the relation between media and consciousness—how one determines the other—and the subsequent distinction between cultures that deal primarily with “traditional images,” cultures centered on writing, and cultures that will be centered on the technical image. Between the first and the second, and then between the second and the third, “two fundamental turning points” (Flusser, 2000, p. 7) take place. They affect the way humanity sees its world, engages with it, and acts inside it. These turning points are more than matters of registering or representing; they are a matter of human “reality” (the quotation marks are Flusser’s, 2000, p. 10), that is, of “what we run up against on our journey towards death; hence: what we are interested in,” as the philosopher writes in the glossary at the end of Towards a Philosophy of Photography (Flusser, 2000, p. 82). This is because the world “is not immediately accessible to human beings” (Flusser, 2000, p. 9), so humans need mediations to comprehend it. Images are such mediations.

The image, both traditional and technical, is a matter of encoding and decoding. To encode is to abstract into surfaces what one comes up against in space and time, and to decode is to read these surfaces and “project them back into space and time” (Flusser, 2000, p. 8). Encoding and decoding are the two aspects of what Flusser denominates imagination, and images are defined as “significant surfaces.” The main distinction between image and text is the way one grasps their meaning, their symbolic import. The image-surface is scanned by the viewer, who relates her own intentions to those of the image and can direct her gaze back and forth across it, thus establishing a circular form of time whence significance arises. Text, by contrast, is read in a linear way, so that the temporality implied by its own form is itself linear.

This is why the “time and space peculiar to the image is (...) the world of magic, (...) in which everything is repeated and (...) participates in a significant context” (Flusser, 2000, p. 9). The time and space peculiar to text, on the other hand, is that of history, “in which nothing is repeated and (...) everything will have causes and (...) consequences” (Flusser, 2000). There is a reason why the first turning point takes place: images tend to be “more real” (in the sense of what interests humans) than the reality they are supposed to mediate, so that “[t]hey are supposed to be maps but they turn into screens: instead of representing the world, they obscure it until human beings’ lives finally become a function of the images they create” (Flusser, 2000, p. 10). The image becomes the goal, rather than the means, and the world itself, when grasped by humans, turns out to be an image. Unsatisfied with this situation in which “imagination has turned into hallucination” (Flusser, 2000), which Flusser names “idolatry,” “some people” attempt to reorganize the flow of signification of the images. They substitute lines for surfaces, thus giving birth to historical consciousness, in opposition to magical consciousness. Together with historical consciousness, conceptual thought comes about, as a modality correlative to imagination.

Though there is an opposition between text and image, as the former is expected to recover the clarity and symbolic functioning that the latter has lost, Flusser makes clear that the relationship between them is more complex, even dialectical. If texts are expected to explain images, they ultimately exist as a function of the image. They are “a metacode of images” (Flusser, 2000, p. 11), a code being “a sign system arranged in a regular pattern” (Flusser, 2000, p. 82). Images, in turn, illuminate texts and make them easier to understand. This dialectic is present throughout history. Moreover, if there is an “idolatry” of images, there is also a “textolatry,” in which texts come between humanity and its world, the world becomes a function of the texts, and the text has more reality (as a possibility of engaging human experience) than the images it should explain and the world these images should clarify. When texts become incomprehensible, a new technology must emerge to reinstate the symbolic mediation between humans and the world, just as texts had done when images ceased to perform appropriately. This is the origin of “technical images.”

Through the notion of the technical image, which effects the second “turning point,” Flusser attempts to imagine “the prospect of a future society that synthesizes electronic images” (Flusser, 2011a, b, p. 3), a post-historical society that overcomes the reliance on the linearity of texts in favor of synchronic technical images. At every step in this progression, as the temporality implied by the media in use changes, the form of consciousness is transformed as well. As a result, the phenomenal relation to the world, i.e., reality, changes too. At one level, there is a gain in organization and information; at another, there is a loss when the medium is put before the relation it should mediate, generating new forms of noise, such as idolatry and textolatry. The prospects and risks surrounding technical images function in the same manner.

The first signs of the emergence of a society based on technical images are to be found, of course, in photography, as Walter Benjamin noticed when writing his “Little History of Photography” (Benjamin, 2000b). The advent of this medium opens the possibility of producing images in a new way, by means of an apparatus that prints light on a surface made of chemical elements, rather than through a series of gestures. Photography marks the first instance in which the linear stream of writing is blocked, so that the chain of events can be read in a non-linear fashion. After photography, many other apparatuses, gadgets, and instruments come about, amplifying the possibilities of transmitting information in ways other than linear writing.

From the standpoint of this study, the interest of the technical image stems mainly from three of its properties, according to Flusser’s description. Firstly, the technical image is generated by informing (i.e., applying a form to) random particles; it is an organizing act that takes the dimensionless realm of the particles and the gaps in between, and orders them into surfaces. This is how meaning emerges. According to Flusser, “technical images rely on texts from which they have come and, in fact, are not surfaces but mosaics assembled from particles. They are therefore not prehistoric, two-dimensional structures but rather post-historical, without dimension” (Flusser, 2011a, b, p. 6). Technical images are mediated by and dependent on code, which, for Flusser, has a textual structure. Thus, even as people communicate through images, there is an underlying stratum of coded text that remains invisible to them. This is a different relation from that between text and traditional image as described by Flusser: one explaining, the other illuminating. Here, the code predetermines the process of taking form but does not usually appear to the viewer. Often an image produced through the apparatus and the code is quite indistinguishable from a traditional image, especially for a viewer without knowledge of the code or the apparatus.

The apparatuses act at the level of random particles because human intelligence and ingenuity have reached this far. Flusser evokes mathematicians like Leibniz and Newton, who in the seventeenth century developed differential and integral calculus in order to seize infinitesimals. The kinds of problems (or their structure) human societies face at this point are of this level, which is the level of the particles of the universe. In Towards a Philosophy of Photography, Flusser pointed out that every informational activity of the human mind is an act against the tendency of all matter towards heat death, i.e., entropy. Into the Universe of Technical Images goes a step further, with an argument borrowed from contemporary physics: the state of a field of particles is a question of probability. The most probable states, tending to the inevitable, are not informative, as they present even distributions; the improbable states are the ones that inform and, therefore, are the most interesting for the human observer, as they are the ones she will seek to produce in the midst of the chaos of particles and repetitions. The distinction has a wide impact, changing the way one perceives the world, understands it, and most importantly, acts in it: “[f]rom now on, concepts such as ‘true’ and ‘false’ refer only to unattainable horizons, bringing a revolution not only in the field of epistemology but also in those of ontology, ethics, and esthetics” (Flusser, 2011a, b, p. 17).

Though the technical image results from organizing the randomness of the particles (e.g., the analog photograph that assembled chemical particles, or the digital camera that registers bytes by coding light), the ease with which it is produced and reproduced (cf. Benjamin, 2000a) can quickly turn it, from the standpoint of the receiver, into the very repetitive and entropic reality it was meant to counteract. If for traditional images the risk is idolatry and for texts it is textolatry, technical images can likewise reverse their function of generating information. Once they become excessive, they can be a source of entropy, in the sense of more probable states: banality, which is merely impressive and numbing. This is, of course, the dystopian side of Flusser’s forecast.

Secondly, technical images result from the activity of technical objects and systems, the apparatuses. An important difference between technical images and their forerunners, the “traditional” images, is that the latter are produced “by depicting,” as in the Platonic theme of mimesis. The technical image emerges, in turn, from the computation of particles. “Traditional” images are an early form of mediation between human consciousness and the world, or between the hand and things; in sum, between the emerging subject and its objects. Taking the Lascaux paintings as an example to claim that every human society has members responsible for symbol manipulation, Flusser stresses the symbolic and collective character of the image: “Symbols that are linked to content in this way are called codes and can be deciphered by initiates. To be inter-subjective (to be decoded by others), each image must rest on a code known to a community (initiates), which is the reason images are called ‘traditional’” (Flusser, 2011a, b, p. 12).

On the other hand, technical images arise, according to Flusser, in order “to consolidate particles around us and in our consciousness on surfaces to block up the intervals between them in an attempt to make elements such as photons or electrons, on one hand, and bits of information, on the other hand, into images” (Flusser, 2011a, b, p. 16). But the bodily gestures of depicting, or of acting upon objects, can no longer be called upon, as “these elements are neither graspable, nor are they visible” (Flusser, 2011a, b). This is why the apparatus is needed: “to an apparatus, particles are no more than a field of possible ways in which to function,” and “it transforms the effects of photons on molecules of silver nitrate into photographs in just the same way: blindly. And that is what a technical image is: a blindly realized possibility, something invisible that has blindly become visible” (Flusser, 2011a, b, p. 16). The apparatus seeks relations of meaning where there might originally be none, until a certain meaning effectively comes about and is identified as an image.

Flusser points out that any non-human agency is an apparatus, from machines to institutions, from computers to the market. Even as an object (a camera), an apparatus is a cultural object, produced out of the objects of nature in order to serve a human goal. An object becomes meaningful when it is grasped by the human, detached from its natural context, and given its own human form. This is when it can in fact be called an object, as the human constitutes itself as subject. And like images, an object can also be informed, to the extent that images can be printed on objects (such as money). The apparatus itself, manipulable object or overarching institution, contains code (or “scientific text,” cf. Flusser, 2000, p. 14) from which the technical image is derived, so that the relation between object, text, and image remains present throughout the process, even if one considers that the final product is only the image (or another informed object). Technical images correspond to an effort that is intrinsically distinct from that of seizing or understanding objects and ordering a world composed of such objects. The question here concerns the infinitesimal, the random, the relational. Rather than interpreting or grasping objects, one produces them, generates them from subatomic matter, i.e., energy, and from the stream of thought. It is not a matter of thought producing its own objects: instead of a post-Kantian or phenomenological problem of transcendental esthetic, one faces a problem of Deleuzian difference-in-itself and agency: how particle elements come together to form sense.

Thirdly, the generation of technical images also implies the existence of a human operator, who decides on the meaningfulness of the images that actually emerge from the particles. Flusser compares photographers with hunters stalking their prey, with the difference that the hunter looks for food in a wild and perilous savanna, whereas the photographer seeks to generate sense in a “jungle of cultural objects” (Flusser, 2000, p. 33) that can obscure her vision, demanding attention and an active choice. Moreover, the action of the operator is limited by the apparatus’ own program, the apparatus being an agency in which scientific texts are at work. “The apparatus does as the photographer desires, but the photographer can only desire what the apparatus can do” (Flusser, 2011a, b, p. 20), as Flusser warns. This does not mean that an active and creative relation to the apparatus, and by extension to the image, is impossible. Once again, there is a dialectical relation between the apparatus and the operator, that is, between the object that operates through code and the gestures and intentions that must deal with these pre-programmed possibilities while also generating sense and improbability. Flusser compares automatic photographs made by satellites with images produced by the click of a photographer, but a striking example is given by the German filmmaker Harun Farocki in Images of the World and the Inscription of War (1988): the first Allied images of Auschwitz, taken in 1944, were unintentional, as the meaningful element, for the intelligence staff who actually saw the images, was an IG Farben factory. Through a single programmed process of emergence of the technical image, two different forms of meaningfulness emerged, according to two different intentions: Auschwitz was only “seen” in these images in the 1970s. In the absence of either of these intentions, or of the ability to read aerial photographs, the same image would be redundant, not very informative.

This is where the figure of the “envisioner” is introduced. The envisioner is a symbol manipulator, of the kind found in any human society: poets, soothsayers, financial advisers. But it is someone who understands the powers and limitations of the apparatus, escaping and thereby managing its entropic tendencies. She does not depict the world, she informs it, meaning that she does not represent objects or adjust concepts to things; she drives the process by deciding where the image will surface, i.e., how the forms will be effectively applied. Envisioners counteract the tendency towards redundancy present in automation, as in the satellite photographs or the overwhelming production of irrelevant images in the Bilderflut. In the digital world, one can identify in the envisioner a certain curatorial role, in the sense that she is able to extract meaning from the cacophony of messages that stream incessantly. In a world of big data, where sensors become ubiquitous in cities, satellites, machines, buildings, etc., someone must be able to identify and choose where the flow should be interrupted in order to produce information. This is the role of the envisioner.

The interruption of the flow occurs via a particular part of the apparatus, which operates the intermediation between humans and the world: the key. For Flusser, in the universe of technical images, humans act by pressing keys, such as the button of a camera, the light switch, the button that ignites nuclear weapons, or the typewriter’s keyboard, the latter being Flusser’s example. Given the system/apparatus, the key has the power to stretch human perception and action out towards the infinitely big and the infinitely small, bridging “the famous sandwich according to which the world is made up of three layers, one with atomic, one with human, and one with astronomical dimensions” (Flusser, 2011a, b, p. 24). Switching on a light bridges the gap between the universe of electrons and a room occupied by humans, on a human scale. Transmitting events from afar (from Japan to Brazil, in Flusser’s example) in “real time” reconfigures human geography (the enormous distances of the globe) by harnessing ionic waves. The envisioner decides where the information actually is and activates the key; in possession of the key, the envisioner has the power to decide how the world will appear to people as they try to act in it: the envisioner shapes reality, in the sense of “what is interesting” as we live.

Having described the technical images and the apparatuses that surround them, Flusser depicts the post-historical world he foresees, a world that, after the two-dimensional (traditional) and the one-dimensional (textual), becomes non-dimensional as it delves into the chaos of particles, in the physical universe and in the mind. If in the (magical) universe of traditional images time moves in a circular fashion, and in the (historical) universe of text time moves linearly, one might ask what happens to time in the universe of technical images, in the era of the Bilderflut. Time itself, it would seem, must be deflated of its dimensionality. Flusser’s theses can thus help to illuminate a ubiquitous notion in contemporary media: that of “real time,” which underscores the synchronous character of the age of technical images. Originally used to refer to an event’s actual length of time, with communication technologies “real time” has come to mean two things: firstly, that the receiver is witnessing an event practically while it is unfolding; secondly, that an electronic apparatus is reacting almost instantaneously to input, producing responses as if there were no interval. In social media and other forms of digital interaction, communication is said to be “in real time.” News is also said to come in real time, and chat rooms let people keep conversations “in real time.”

Ultimately, this notion expresses the degree to which one can contract an interval through technical means, without eliminating it completely. If taken literally, “real time” would imply the suppression of temporality itself, but what we learn from Flusser is that the generation of technical images in this infinitesimal manner consists in bridging a gap between levels of the universe, the infinitely small or the infinitely big, and translating them to a human scale. This is done on both the sender’s and the receiver’s sides, and they interact through an apparatus that uses electric impulses, ionic waves, and other particles to code and decode messages. The interaction occurs as if the participants were facing each other, a metaphor actually present in terms such as the chat room or the discussion forum. Indeed, the temporality implied in the notion of real time designates an interval too short to be immediately perceived. The notion of “real time” creates the illusion of the elimination of the time-span that is implicit in the very spatial notion of distance. But this is a form of connection to which it is easy to relate, which offers perceptions and understandings that owe little to any other form, and which is perfectly capable of orienting human action. It determines a perfectly effective folding of reality as it is experienced. Many digital technologies explore this process in depth, as we shall see concerning digital finance.

Given this non-dimensional character of technical media, Flusser foresees two opposing possibilities for the post-historical society of technical images:

One [trend] moves toward a centrally programmed, totalitarian society of image receivers and image administrators, the other toward a dialogic, telematic society of image producers and image collectors. From our standpoint, both these social structures are fantastic, even though the first presents a somewhat negative, the second a positive, utopia. (Flusser, 2011a, b, p. 4)

The risk of totalitarianism that Flusser mentions lies mainly in the fact that the images are not static and the receivers are not passive. There is a relation of intentions between the producers and administrators of images on one side, and receivers on the other. Images are sent with the intention of inciting receivers to react, be it by arousing enthusiasm (as in sports broadcasting) or by motivating an impulse to buy (or a desire to buy) a certain marketed product. This relation can be perfected: the reactions can be analyzed and the generation of images can be adjusted so as to produce a more certain result. “Give the public what they want,” an advertising executive might say. But if the results are more probable, the images are, by definition, less informative. The apparatus’ potential for negative entropy is reversed; it turns to concocting redundancy and entropy.Footnote 4

The keys that operate this process could surely function differently, as they can potentially both receive and send. This was once the case with radios and is still partially the case on the Internet and the social networks, though the companies that run these networks are constantly developing more powerful algorithms that control the process of interaction (Eslami et al., 2016). For Flusser, until humanity manages to develop the means to turn technical images into an effective form of dialog, these images will isolate people and reduce them to the category of receivers. Flusser criticizes Marshall McLuhan’s notion of a global village because, in his view, the public sphere and the private homes necessary for such a village are lost in the universe of technical images (as it stands) (Flusser, 2011a, b, p. 30): images lie at the center of our reality, irradiating their own logic, so that the overflow of images and the ease with which they are made produce a feedback loop by which their production adapts to the wishes of the public, but only inasmuch as the public adapts back to the images.

In the bleak version of the dystopia, together with the receivers who enter into a redundant relation to images, Flusser foresees a world in which producers of technical images remain in their cubicles pressing keys, isolated from one another and subsisting inside this loop. A contemporary illustration of the implications of his ideas is to be found in a work of fiction cinema released in 1983, the same year as Flusser’s Towards a Philosophy of Photography. In WarGames, a teenager played by Matthew Broderick unwittingly risks starting a world war as he breaks into a US military computer system while playing at his home computer. Whatever commands are actually being given, whatever impact his playing may have on real-world events (e.g., launching physically existing missiles), the actual decision-makers (the teenager and the military personnel) only interact with technical images and operate through keys. What seemed dystopic in 1983 became an actual war strategy in the following decades, with guided missiles in both Iraq wars and the development of an immense drone warfare program, in which the operators of events taking place on the ground sit far away in cubicles, acting through screens and keys.Footnote 5 This exemplifies the dystopian perspective on technical images: how does one know what actually happens as one operates them? How do we grasp their relation to a world they are supposed to either represent or model, if the code remains unseen? The financial cases below illustrate precisely these problems.

3 An Overview of the “Money Problem”

Definitions of money vary greatly, to the degree that Schumpeter (quoted in Hart, 1986, p. 643) writes that “views on money are as difficult to describe as shifting clouds.” In broad terms, approaches to money grasp it either as a special kind of commodity (a general equivalent) (cf. Menger, 2009), as an institution of (mostly State) power (cf. Knapp, 1924), or as a sign for other values, as expressed in the notion of the price-signal, present with different meanings in the works of Thorstein Veblen (1921) and Friedrich Hayek (1948). More generally, money has been envisaged as a sign throughout the history of thought, from the Salamanca School to Jean Bodin (cf. Foucault, 1966). In both the Grundrisse and Capital, Marx discusses money’s character as a sign (Nelson, 2005), criticizing the theories that remain limited to this aspect. According to Hart (1986), all these analyses fall short of the fact that money performs all these roles. The coin has two sides, Hart says, and both are indispensable: it has heads, usually of a sovereign or a symbol of the political power sustaining its value; and it has tails, usually representing the more mundane reality of everyday transactions (Hart, 1986, p. 645).

One might add that the complete coin, with heads and tails, is greater than the sum of its parts, constituting a “third side.” Defining money as merely a commodity exaggerates its functions in commerce and the financing thereof; defining it through the power that issues it exaggerates its capacity to perform a social unity, including the structuring of a financial system by means of credit; and stressing only its character as a sign for other values leaves aside its power to determine the way in which these values themselves are made significant and visible in a given social configuration. But this is not only a question of scholarly definition. As Graeber (2011) demonstrates, following Hart’s suggestions, different economic configurations require different proportions of each of these aspects of money. Landed aristocratic empires tend to be associated with commodity money, which is more stable and durable. Token money, based on debt and credit, has the opposite quality of being malleable and swiftly canceled, which is why commercial nations give it preeminence: it allows for greater bets, risk taking, and financial operations. The definition one gives of money expresses what one expects from it: either the foundation of credit on top of commerce (in the commodity theory), or the rendering of commerce possible by organizing the way it is financed (in the credit theory).

These arguments contain two important points. Firstly, if money is to perform properly in finance, commerce, and taxation, then it must fulfill two apparently (but only apparently) conflicting conditions. On the one hand, it needs to be incarnated in an object or stable image (e.g., gold or national currencies). On the other hand, it must be malleable and abstract, so as to allow for credit, debt, and investment.Footnote 6 The point at stake in Hart’s argument is that the relation between these two aspects is inseparable from money itself, which obtains its power from this dialectical tension, much like the tension between image and text described by Flusser in the last section: they are opposed, but one completes the other, as there is no commerce without previous financing, and there is no finance without commerce in sight. The second point is that there are myriad configurations in which any monetary form can function within these two borders. Thus, depending on how the institutional framework of a given society is set in place, different monetary configurations can function, from the gold standard to so-called “fiat money,” from the subsistence in the Middle Ages of Roman denominations of “imaginary money” (cf. Graeber, 2011, p. 37) to the “stone money of Yap” that fascinated both Keynes and Friedman (cf. Martin, 2011).

There is nevertheless a correlation between the way money is configured in a given social reality and the kind of consciousness one can associate with it, as in Flusser’s distinction: magical (circular) or historical (linear). The way money is represented through images and introduced into the logic of linear temporality is at the core of its power, while its power is at the core of its ability to function as money, i.e., to effectively be money. Money can just as well be engraved on particular objects (inscribed and projected on their surface) as registered in ledgers (inscribed according to balances of operations). Marcel Mauss was the first author to stress the former aspect of money, arguing combatively for the importance of recognizing this feature, as can be read in his 1914 presentation Origins of the Notion of Money; he went so far as to define as money something nearly every other anthropologist, sociologist, or economist would see merely as its “primitive form” (as in his “Note of Principle Concerning the Use of the Notion of Money,” in Mauss, 2007, p. 105).

Indeed, Marcel Mauss’ crucial essay on The Gift (originally published in 1925) offers insight into how the values embedded and incarnated in money can act detached from it, reproduced in the consciousness of people who interact removed from the presence of any image of money or any object that counts as money. One of the gift ceremonies the French ethnographer examines most extensively is the Kula ring in Melanesia, as a commentary on Malinowski’s Argonauts of the Western Pacific, published in 1922. The Kula, apex of a vast economic structure of gift-giving, is the moment in which leaders compete for prestige by exchanging gifts. These gifts, which for Mauss carry a numeric measure of value and are at the origin of our conception of money (Mauss, 2007, p. 105), are symbols of each clan’s strengths: images of power and prestige imprinted on sacred objects. Parallel to the Kula, ordinary tribesmen perform a different kind of interaction, the Gimwali, which consists basically of barter. It is significant that such trade occurs in the shadow of the Kula, to the extent that a leader who tries to establish an equivalence in the Kula is scorned for “doing the Kula as if it were a Gimwali” (Mauss, 2007, p. 101).

What Mauss is showing is that the Kula, the greater ceremony, involving the powerful (chiefs and priests), stabilizes and affirms values for the entire ethnic group through an economy of sacred objects. These objects, by means of their own power (cf. Mauss, 2007, p. 53), the mana (a notion Mauss generalizes), represent iconically the powers of the lords; these values, represented on the surface of the objects, spill over to the common people who perform the Gimwali. Without the Kula, there would be no Gimwali: no barter, no trade of equivalences in the absence of the great exchange of gifts and values. The commoners trading in the Gimwali act under the sign of values established, affirmed, and stabilized elsewhere. The values are already constituted for them, and they can identify their figure right away. The Kula is, as Mauss names it, a total social fact (fait social total), as it involves all aspects of the collective’s life world: law, religion, economy, power. For the collective, meaning takes place with the ceremony. This is where Mauss identifies the origin of the notion of money (cf. Mauss, 1914): the power conveyed by, through, and in the representation, as presented on the surface of an object. But the most relevant aspect of Mauss’ argument is that the operation of these forms of money travels far beyond the presence of the object on which they are printed (or incarnated). As representations, these values are present in the minds of those who perform the barter (Gimwali) as well. These overarching values are reaffirmed yearly, in a circle of ceremonies and exchanges justified by the magical aspect of the sacred gifts.

The second feature works differently. Credit theories of money, such as Knapp’s (1924) chartalism or Randall Wray’s (2012) Modern Money Theory, stress the role of political power in determining the value of money. At this stage, what is at stake is the registration of property, in the name of fiscal policy. This is money that can be written down in books; to all intents and purposes, any object that eventually represents its value (such as gold) can be kept eternally in vaults, while the only values that actually circulate are scriptural. In the age of the gold standard, for instance, most gold reserves were kept in the Federal Reserve of New York (cf. Bernstein, 2000); during the Bretton Woods period, the American reserves were kept in Fort Knox (Bernstein, 2000). In this case, what makes money foundational for commerce and finance, regardless of which is derived from the other, is that through it the stability, outreach, and fixity of values is affirmed and acquires a form of its own. Much of the technicity surrounding money, from banking rules to financial instruments, relates to the management of value in time (risk management is an example) and between territories (exchange rates are an example). Debts, particularly public debt, can be accumulated indefinitely, just as private credit can inflate indefinitely; the notion of economic growth belongs to such an era, as do financial instruments that use conceptual, scientific images (mathematical formulas) to calculate return and risk. This is also the period in which money is conceived as a “promise to pay,” implying a payment to be made in the future; conversely, a debt expresses a loan made in the past. One produces a commodity to be sold in the future and works according to a contract signed in the past. A currency area (cf. Mundell, 1961), which most often coincides with a political unit, implies both the validity of all monetary values across the territory, as they share a form, and a mechanism designed to define the relation between these values and those of another territory.

This form of money, in addition to the symbolic function it exerts in Mauss’ example, incorporates a linear relation to temporality alongside the circular affirmation of values of the objects present in the Kula ring. In this sense, there is a history of money, and the “idolatry of money” in this context is less the extreme belief in sacred values that could take place in other monetary forms (which is perhaps why some leaders might “perform the Kula as if it were a Gimwali”) than an excessive belief in the accumulation of wealth, and in money as a sign of this wealth. Meanwhile, the dialectical and tense relation between the textual character of scriptural money and its traditional form as a symbol, usually but not necessarily inscribed in an object (as in the relation between the Kula and the Gimwali), is present as well, leading governments to move in and out of the gold standard during crises and wars,Footnote 7 while ordinary people occasionally hoard money in the form of cash for fear of banks going bankrupt.

One particular financial notion deserves to be examined in more detail in order to understand the role of the (technical) management of time in the operation of monetary relations. It counts among the most basic ideas in the terminology of finance: present value. What makes this notion interesting for this analysis is that it is accompanied by a performative act, that of bringing to present value. Operating through a mathematical formula (a conceptual image), it has a textual logic and deals with time in a linear way, but ultimately produces a figure that is a symbol of a whole process. How does one calculate the actual worth of a bond or any other financial asset? (One must also note that the term actual carries its own form of ambiguity, meaning both true or real, as opposed to false or imaginary, and current, existing, as opposed to virtual.) The worth of the bond in act is determined by its present value, discounting the interest rates, the risk attached to the asset, and any other variables that might potentially affect its future performance.

Here is an example of a very basic formula for calculating the present value of a bond:

$$ PV = \frac{FV}{\left(1+r\right)^n} $$

In this formula, PV stands for present value, FV for future value (often the face value of an asset), r for the rate of return per period (an interest rate, for example), and n for the number of periods. The act of bringing to present value generates a figure of what is expected to occur to an asset’s price in the future, by means of its worth, as it is visible and readable by all potential traders. A $1000 bond discounted at a rate of 2% per month over 60 monthly periods (5 years) has a present value of $304.78. Depreciation and risk can also be brought into the calculation of present value. Ultimately, one can calculate one’s own present value by estimating how much one is supposed to earn given one’s educational level and the state of the market (Schultz, 1961).
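For readers who prefer to see the operation spelled out, the discounting above can be reproduced in a few lines of code. This is a minimal sketch of the formula as given, not a full bond-pricing model (coupons, depreciation, and risk premia are left out), and the function name is ours:

```python
def present_value(future_value: float, rate: float, periods: int) -> float:
    """Discount a single future payment back to the present: PV = FV / (1 + r)^n."""
    return future_value / (1 + rate) ** periods

# The example from the text: $1000 discounted at 2% per period over 60 periods.
print(round(present_value(1000, 0.02, 60), 2))  # 304.78
```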

The stable, visible, malleable figure of the asset with its present value makes it a meaningful concept in the market, i.e., confers sense on it within a specific social circle. The figure in present value arrests a flow of rhythms of payment and renewal of interest, a flow that extends into the future under the form of a promise or a debt. Therefore, the actual figure that carries the information of a present value derives its existence from virtual relations that are merely expressed and presupposed by the asset itself. This is a feature of technicity, as the figure of present value is an image with which it is possible to work and which it is possible to trade. Decisions can be made based on it, because a vast and virtual universe of the expected, desired, and promised is condensed into this price (present value). To a certain extent, this is the same operation performed by the circular monetary figures of the Kula ring, imprinted on the sacred objects and flowing as representations towards the entire community. Yet what is condensed in the figure of the price (as present value) is the linear flow of temporality to which the asset will be subjected in the future.

The actual decisions that are up to the traders are made according to this condensed figure, but the buying and selling operations act upon the whole composition, including the virtual (the expected, desired, promised). In short, this means that the operation by which one “brings to present value” consists in generating a formalized and managed figure of the future—an expected, virtual future, for sure, but still a future—which is possible to fathom and deal with. The example of this simple financial notion is useful because both HFT and blockchain-based cryptocurrencies operate to a great extent by managing temporality. Albeit in a different sense, they too engender a “present value,” and deal with promises and belief. But as technical phenomena, this temporality is not the same linear one, and it is not, once again, the circular temporality of “magical consciousness.” If these phenomena are indeed technical images, one must be able to identify their non-dimensional (or post-historical, as Flusser would have it) relation to temporality. The point is not to ask how HFT or bitcoin operate in time, but rather how they operate with time, and how this marks their character as technical images.

4 First Technical Image: HFT

HFT is a particular form of algorithmic trading, in which algorithms simulate human traders but are able to perform millions of operations per second. Most accounts of HFT (cf. Arnuk & Saluzzi, 2012; Scott, 2013) focus either on how it amplifies liquidity and perfects price discovery, or on the risks it may entail, such as the events known as flash crashes (cf. Ortega Barrales, 2012; Lewis, 2014). One document that does both is a report from the British Government Office for Science entitled “The Future of Computer Trading in Financial Markets,” published in 2012. The report begins with a very efficient summary of the transformations brought about by computer trading:

The volume of financial products traded through computer automated trading taking place at high speed and with little human involvement has increased dramatically in the past few years. For example, today, over one third of United Kingdom equity trading volume is generated through high frequency automated computer trading while in the US this figure is closer to three quarters. (p. 2)

According to Lhabitant and Gregoriou (2015), an HFT strategy “typically consists in moving in and out of extremely short-term positions to capture fractions of pennies in profit on every trade, with a view that extremely large volumes of transactions can compensate the low margin per trade.” While there is no official definition of HFT (Shorter & Miller, 2014, p. 10; Lhabitant & Gregoriou, 2015, p. 158), there are certain traits that indicate the activity of high-frequency traders (known as quants or algos, the latter term also being used for automated trading in general). The most relevant for the purposes of this article are: “the use of extraordinarily high-speed order submission/cancellation/modification systems with speeds in excess of 5 milliseconds” (Shorter & Miller, 2014; Lhabitant & Gregoriou, 2015), “very short time-frames for establishing and liquidating positions” (Shorter & Miller, 2014; Lhabitant & Gregoriou, 2015), and “the submission of numerous orders that are cancelled immediately or within milliseconds after submission” (Shorter & Miller, 2014; Lhabitant & Gregoriou, 2015).Footnote 8 In other words, an HFT algorithm explores the limits of latency in the market, i.e., the interval between bids and executions. It also shifts its own positions in intervals as short as possible (which generates a race to install computers as near as possible to the exchange server, a practice known as co-location), and this allows the algorithm to place innumerable bids but cancel them as soon as an answer is received, i.e., before a transaction takes place. Often the algorithm will “ping” small orders in order to detect the other orders present in the market, and the speed with which this is done nearly guarantees that the owner of the algorithm will always get the best prices, as sketched below.
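The “pinging” pattern can be illustrated with a deliberately simplified simulation. This is a toy sketch, not a real trading system: the hidden resting orders, the prices, and all names below are invented for illustration, and real venues involve matching engines, order types, and latencies that this omits.

```python
# Toy illustration of "pinging": probing for hidden liquidity with tiny
# immediate-or-cancel orders. All names and numbers are hypothetical.

HIDDEN_ASKS = {101.05: 500, 101.10: 1200}  # price -> hidden resting sell size

def ping(price: float, size: int = 1) -> bool:
    """Send a tiny buy order at `price` and report whether it would fill.
    An unfilled ping is cancelled immediately (immediate-or-cancel)."""
    return any(ask <= price and qty >= size for ask, qty in HIDDEN_ASKS.items())

def discover_best_ask(low: float, high: float, tick: float = 0.01) -> float | None:
    """Walk prices upward one tick at a time until a ping fills.
    The first filling price reveals the best hidden ask."""
    price = low
    while price <= high:
        if ping(round(price, 2)):
            return round(price, 2)
        price += tick
    return None

print(discover_best_ask(101.00, 101.20))  # 101.05: hidden liquidity "discovered"
```

The point of the sketch is the asymmetry of speed: an algorithm that can run this probe-and-cancel loop in microseconds learns the state of the book before slower participants can react.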

These features make algorithmic traders efficient as “market makers” (where “to make” has the sense of “to render possible”). By analyzing supply and demand around a certain bid, the algorithm can adjust the price accordingly, thus perfecting the mechanism known as “price discovery.” The high cancelation rates associated with HFT algorithms (Susai & Yoshida, 2015, p. 234) can be attributed to many factors, from the mere adjustment of prices across distinct marketplaces to outright illegal activities, such as spoofing or front running.Footnote 9 This is one of the reasons why HFT has become controversial: it is unclear whether the enormous volume of bids and cancelations increases precision in price discovery, whether it actually consists of strategies to push prices up or down, or whether it simply favors faster machines that attain better prices as they appear. But the speed and the intensity of these algorithms are also interesting for other reasons: the computers generate the figure of a price by means of a mathematical formula acting, within a very concentrated interval of time, on an enormous amount of data. The price is informed through an operation involving an apparatus, a code, and a programmer.

Rather than actually making the market, in the sense of rendering it possible, the algorithms shape the market by ordering the singular manifestations of supply and demand. Each order consists of a particle of codified desire (as it represents what a particular participant in the market wishes to buy or sell). What gives prices their reality in this sense is that they confer order on the gaps between particles of desire, which involve their own calculations: agents conceive their ideas of what the value of an asset is and then decide whether or not they want to buy or sell. From the beginning, these bits of data representing someone’s desire are coded, simulated, and fitted into the mathematical model with which the algorithm operates.

The algorithm that performs this fitting is produced by the knowledge of someone who is potentially an envisioner in Flusser’s sense and acts by pressing keys. The shaping of the market occurs when an enormous amount of bidding teaches the algorithm how the data, the particles of desire (supply and demand), are distributed, allowing the computer to decide how to act upon this overarching representation. It is a representation of the market at a certain instant, a nearly immediate cartography: a real-time map of supply and demand. Yet one must notice that the trader herself cannot see the map that results from this cartography. Only the machine has direct access to this level of organized data (this information), which changes much more quickly than human perception is able to grasp: it changes in nanoseconds, in “real time.” Price, which for Veblen (1921) and Hayek (1948) provides a signal, in this case does much more: it translates the algorithms’ activities into the language of money, which can be understood by the flesh-and-bone agents in the market. It is the relay between the energy spent by the algorithm as it generates its map and the decision the trader may wish to make. The first layer, the map itself, the surface of the technical image, is accessible only to the computer. The human trader can only access the second layer: the resulting prices. These prices flash before the human eye, and their evolution in time can be plotted onto linear graphs that function in the same way as traditional images. The graphs are not rigorously the same as the cartography; they subsist, once again, in a dialectical relation involving the cartography, the code, and the price.
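The two layers can be made concrete with a toy example: the machine-level “map” is simply the aggregated book of resting desires to buy and sell, while the human-facing layer is the single figure distilled from it. The sketch below is illustrative only; the orders and the mid-price rule are invented, and real order books are far richer than this.

```python
from collections import defaultdict

# A snapshot of orders: (side, price, size). These stand in for the
# "particles of codified desire" discussed above; the numbers are invented.
orders = [
    ("buy", 99.98, 300), ("buy", 99.99, 150), ("buy", 99.99, 200),
    ("sell", 100.02, 100), ("sell", 100.03, 400), ("sell", 100.02, 250),
]

# First layer: the machine's "map" of supply and demand, aggregated by price.
book = {"buy": defaultdict(int), "sell": defaultdict(int)}
for side, price, size in orders:
    book[side][price] += size

best_bid = max(book["buy"])    # highest price someone will pay
best_ask = min(book["sell"])   # lowest price someone will accept

# Second layer: the single human-readable figure distilled from the map.
mid_price = round((best_bid + best_ask) / 2, 3)
print(f"bid {best_bid}, ask {best_ask}, mid {mid_price}")
# bid 99.99, ask 100.02, mid 100.005
```

In an HFT setting, this aggregation is recomputed millions of times per second; the human trader only ever sees the final line, never the full book as the machine holds it at any given instant.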

At this level, regardless of how the algorithm is used (for price discovery or spoofing), it could be argued that it provides incremental information about the state of the market, without indicating any particular tendency toward the future or incorporating events from the past. The linear flow of historical time is present, to be sure, in the adjustments made to the algorithm, as both the program itself and the programmers learn how to better predict other agents’ behavior. This is profoundly different from the basic formula for calculating present value, where the figure obtained through the equation expresses the behavior of an asset over time. As long as the trader retains the power to decide the point at which prices are worth any given transaction, using the cartography performed by the algorithm, she is indeed informed and performs an active role.
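For contrast, recall the textbook formulation of present value, in which linear time is built into the figure itself (a standard expression, reproduced here only for illustration):

$$PV = \sum_{t=1}^{T} \frac{CF_t}{(1+r)^t},$$

where $CF_t$ is the cash flow expected in period $t$ and $r$ is the discount rate. The single figure $PV$ thus condenses an asset’s projected behavior across historical time, whereas the algorithmic map expresses only the market’s configuration at one instant.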

Nevertheless, most of the traders that interact with algorithms are algorithms themselves (Jorion, 2012). Each cartographical instance of the state of the market consists of a form obtained by the intersection of automatic, mathematical calculations producing innumerable price-figures per second. Automation, for Flusser (2011a, b, p. 19), tends to lead to redundancy and a loss of information, i.e., an increase in entropy. If the images produced by the apparatuses cease to reveal the improbable, they become an overflow, a cacophony, an entropic feedback between the apparatus (with the code contained in it) and the receivers, by which one adjusts to the other until meaninglessness is achieved once again. This kind of effect becomes visible when the price-discovery process begins to reveal itself as a price-creation process, within a context in which interacting algorithms shape the market. This happens because the shape of the market ceases to reflect events occurring in the economy and begins to reflect essentially the behavior of interacting algorithms. In other words, the determining factor is no longer knowledge of the value of an asset, but the capability to represent most accurately the latent desires and choices of all the other traders as they appear plotted on the map, even if these traders are machines as well. As far as the reality of the market is concerned, the intersection of maps takes over from the world as the source of data, just as in other contexts images or texts can take over from the world as the experienced source of reality. Ultimately, the majority of human traders act mediated by algorithms; in fact, the speed at which these machines act tends to render the human traderFootnote 10 null.

In this sense, the programmers (or “algos”), who had the potential to choose the most informative moments in the process in order to determine prices that spill over into the economy as a whole (which is ultimately what the idea of price discovery was supposed to express), find themselves submerged by operations that occur, entropically, beyond their command, adjusting to each other in a redundant manner. They still see prices emerging and still read the graphs that purport to represent these processes in a linear way, showing the rise and fall of prices, the bullish or bearish states of the market at a certain point. But the actual interactions, generating forms that are less and less informative, more and more automatic and redundant, occur between the algorithms, modeled according to expected market behaviors and adjusted to each other. Originally intended to assess bits and particles of desire (supply and demand), what they end up assessing is the configuration of the relations between the programmed algorithms themselves. Meanwhile, the operator attempts to produce a glitch of effective information here and there, by hitting keys and changing lines of code.

Within the context of contemporary, automated finance, these divergent tendencies (informative discovery of relations between supply and demand vs. entropic automation of algorithmic interactions) express an aspect of the risks pertaining to the universe of technical images. When programmers and traders (and these two roles are often performed by the same individuals) can make decisions about prices and shape more efficient and liquid markets, they establish a dialog among themselves, i.e., there is a “dialogic” character to their market making. But when algorithms generate an intensive and energetically costly flow of prices that expresses no more than the ability of the mathematical models to predict each other’s behavior, the operator is reduced to a passive role as receiver, numbed in her consciousness of how her reality takes shape.

These tendencies help to illuminate a fascinating, though frightening, phenomenon related to HFT. Indeed, discussions about HFT (see Lewis, 2014) tend to focus on the flash crash, most particularly the first one, which occurred on 6 May 2010. On that occasion, it took only a few minutes for the Dow Jones index to drop 9%, making US$862 billion evaporate. Less than one hour later, the market was reestablished as if nothing had happened.Footnote 11 More than a problem concerning the “risks of automation,” what can be inferred from the flash crash is how occasional errors of the algorithm reveal the passivity to which the operators are actually reduced when their own layer of access to the informative process (the list of prices) fails to be generated. The trader Dave Lauer, in his statements for the documentary “The Wall Street Code,” gives a striking example. Lauer was at the time a programmer of HFT algorithms. In the documentary, he recalls his experience of the flash crash: in a fraction of a second, he says, the market simply disappeared before his eyes. In his words:

As we looked at the screen, the orders started drifting, the orders were being cancelled. And then they started drifting more. And then they started to go off the screen. And then they were gone. There was nothing. There was no market. (...) We’re just sitting there and staring into oblivion. We had no idea what was about to happen. I was thinking: something terrible just happened. Something indescribably horrible just happened. The market was gone.Footnote 12

Lauer believes the market “is gone” because he cannot see the prices, which are his access to the operations that perform the market. In the pipeline that runs from the singular desires to buy and sell (supply and demand), through the mediation of the algorithm and the forming of the map, to the prices that appear before the trader’s eyes, the mediation is dysfunctional and the price is therefore nonexistent; for the human receiver, it is reality itself (the reality of the market) that has vanished. Yet he is right to point out that the market is gone in the sense that the particles that do get organized into relations that make sense in such a market are no longer actual demands for selling and buying, but simulated mediations created by computer programs, according to models conceived by other programmers. This is, one could say, a typical disaster of the dystopian version of the universe of technical images.Footnote 13

5 Bitcoin and the Blockchain

Cryptocurrencies, of which the most famous is quite uncontroversially bitcoin, may be said to lie at the other extremity of the digital finance showcase. Contrary to HFT, these currencies are not a product of mainstream finance and are often developed by volunteer programmers. Furthermore, they aspire to be even more disruptive than HFT, which sees itself only as a “natural” evolution in trading, liquidity, and price discovery. Bitcoin, in turn, is often referred to as “the money of the future” (see Frisby, 2014).Footnote 14 While the definition of bitcoin as money (as opposed to yet another speculative asset) is disputed (see Bjerg, 2016), it is nonetheless accepted as a means of payment in several establishments and is often used as a store of valueFootnote 15 (see Kubát, 2015). While the volatility of the cryptocurrency’s price has been seen as a demonstration of its inherent flaws, its utopian proponents regard these ups and downs as a phase of growth and consolidation, typical of the birth of a currency that will be free from both central banks (i.e., the State) and big financial actors (Bolici & Della Rosa, 2015).

If one examines how bitcoins emerge and function, that is, the operation of this and other cryptocurrencies, their workings turn out to be much more similar to HFT than one might expect. Bitcoin is founded on the blockchain technology, as described in the famous and elusive Nakamoto (2008) paper and more recently also seen as the basis for the future of contracts in general, as in the aforementioned Ethereum project. Firstly, in the case of HFT we have identified the presence of two layers of information, one of them accessible only to the program (the map of the market), the other accessible to the operator (the price or list of prices). As we shall see, there are also two layers of information in the case of bitcoin. Secondly, the manipulation of temporality is also identifiable as a crucial aspect of the form-giving capabilities of the ledger. Yet the way the blockchain deals with temporality is different from what we have described in the case of HFT. Finally, both HFT and the blockchain manage to create form by organizing a multitude of particles into an image, as they establish a particular form of relation by which a certain meaning emerges. In the case of bitcoin, a feature that is not central to HFT plays a decisive role: cryptography. More than a mere strategy to avoid interception, cryptography plays here a constitutive role by decomposing transactions into bits of coded, seemingly random data, and then recomposing them by decoding the data back into a block of information (for a detailed account of the role of cryptography in bitcoin, see Dupont, 2014).

In short, the blockchain that sustains currencies like bitcoin is a digital and public ledger, accessible to any machine connected to the Internet. In this ledger, records of past transfers are grouped into blocks, and these blocks are sealed with the aid of a hashing algorithm (the hash).Footnote 16 The blocks are put into sequence so as to keep a registry of the entire history of transactions, thereby forming chains of blocks, that is, blockchains. The entire network shares this database. Each terminal (each user) possesses a “private key,” paired with a “public key,” with which it signs transactions that the rest of the network can then verify. The advantage of the SHA-256 hash algorithm is that it produces strings of code bearing no visible relation to the original message, so that two messages differing by a single character (“fox” and “fix,” for example) will render completely different hashes. Meanwhile, each new coin emerges from a process known as “mining,” which takes place when a computer manages to solve a very complex mathematical problem (a “proof-of-work” system). To date, the computational power needed to generate a new bitcoin by far surpasses that of any personal computer (Bhaskar & Lee, 2015; Eyal, 2015).
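Both mechanisms can be demonstrated in a few lines. The sketch below shows the avalanche effect of SHA-256 on the “fox”/“fix” example and a toy proof-of-work search; the difficulty is drastically reduced here, since real mining applies the same idea at an astronomically harder setting, and the block contents are hypothetical.

```python
# Illustrative sketch: SHA-256's avalanche effect and a toy proof-of-work.
# The block contents and difficulty target are hypothetical simplifications.

import hashlib

# One changed character yields a digest bearing no visible relation:
print(hashlib.sha256(b"fox").hexdigest())
print(hashlib.sha256(b"fix").hexdigest())

# Toy mining: search for a nonce whose hash begins with four zeros.
block_data = b"previous-hash|recorded-transactions"
nonce = 0
while True:
    digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
    if digest.startswith("0000"):
        break
    nonce += 1

# The "work" lies in the search; checking the answer takes a single hash.
print(nonce, digest)
```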

The blockchain is thus used in the bitcoin system in such a way as to allow for the verification, by the entirety of a network, of any given piece of information. According to the developers, it is impossible, or nearly impossible, to introduce new information into the network in a unilateral manner, as the rest of the network would not have verified this information and it would not be validated. This is why technologies like Ethereum attribute to the blockchain the possibility of creating a system for registering just about anything, from patents to last wills, relying on cryptography to obtain distribution and decentralization. When it comes to cryptographic currencies, this implies that verification becomes the guarantor of whatever value the currency has. When bitcoin was launched in 2009, the crucial issue was to beat the problem of double spending, by means of the registry of all past transactions and the verification of these transactions by the network, or to be more precise, by the machines working on the network. In other words, it was a question of stating value through an immense, distributed, replicated presence. The same goes for the mining process, which, by generating coins as a response to the accumulation of mathematical operations, demands enormous amounts of energy and must be verified by the network.
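A schematic sketch may clarify why unilateral alteration fails: because each block records the hash of its predecessor, rewriting any past transaction breaks the chain for every verifying machine. The structure and field names below are illustrative simplifications, not the actual Bitcoin data format.

```python
# Minimal, hypothetical sketch of chained blocks and their verification.

import hashlib

def block_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(transactions):
    chain, prev = [], "0" * 64  # conventional genesis predecessor
    for tx in transactions:
        h = block_hash(prev, tx)
        chain.append({"prev": prev, "tx": tx, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    """Every node can independently recompute and check each link."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(prev, block["tx"]):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["A pays B 1", "B pays C 1"])
print(verify(chain))           # True: the whole network reaches the same verdict
chain[0]["tx"] = "A pays B 9"  # a unilateral rewrite of history...
print(verify(chain))           # False: ...invalidates the chain for everyone
```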

In sum, the Bitcoin protocol employs a code, a network of computers, a cryptographic algorithm, and a myriad of stored records, each hashed block containing the singular registration of a past transaction. Each coin is singular because the transactions it has gone through are not lost over time but crystallized into it. The coin, when looked upon as a blockchain, grows over time. The recognition of a coin’s value takes place almost instantly (in real time), as the terminals recognize the blocks, which denotes a non-dimensional operation. This is performed by the algorithm, which decodes the encrypted, seemingly random messages generated by the same algorithm on another terminal. But the linearity associated with historical consciousness is also present, as the blocks represent past operations, even if crystallized into present units of data. There is an opposition between, on the one hand, the speed with which the verification of a blockchain (with its layered pieces of data) can take place throughout the network and, on the other hand, the extension of the series of operations the chain can contain. This opposition is indispensable for bitcoin to function as a statement of value; the dialectical relation between the inscription of linear time and the effectiveness of near-immediacy confers on the coin its appearance of being a unit, a singular figure.

As we have seen, the network attests to the value of a coin through algorithms. The accumulated but organized sediments of past operations are the material on which this operation is performed. Much like the HFT algorithms, the bitcoin terminals have access to the whole system in the form of an algorithmic map. The terminals “perceive” this map of the network, but the users do not, as all they see in their virtual wallets is the registration of a certain amount of coins or fractions thereof (and even this information can only be accessed by using an algorithmic key, as the bitcoin network is cryptographic). It is possible to draw a diagram of how the network functions, rendering “noticeable” to the naked human eye what in effect only the computers themselves perceive and only the algorithms can decode, i.e., “read.” But this is only an illustration, not the network itself, much as the charts an algorithmic trader sees are not the market itself, but the inscription and attempted linearization of the operations taking place in it.

The cartography with which the network operates consists in bringing the indeterminate dimension of past events to human scale, in the form of coins possessed in the present. This is how value comes to be recognized in the coins. Cryptography is decisive in this process because it dissolves the singular properties of all past events and inserts them into the network as indistinguishable bits of data. As Dupont (2014) states, while the role of cryptography is firstly to produce privacy, it does so within a social relationship, allowing for a means to control this relationship. Moreover, Dupont argues that cryptography has been, since its inception, a “discrete notational system,” because it is an algorithmic representation, which “permits the transposition and combination of its symbolic elements, but only when made of disjoint, articulate, and unambiguous marks” (Dupont, 2014). Furthermore, “because algorithmic representation permits the rearrangement of its symbolic structure it can be used to order the world, in subtle but powerful ways” (Dupont, 2014). Rearrangement and reordering are, as Dupont argues, at the core of the cryptography that sustains bitcoin in its operation. The algorithm penetrates the history of transactions in order to extract a myriad of events, to organize them as singular but discrete signs, and to structure them according to its own protocols. Afterwards, when this process is reversed, in the sense of decoding, it produces a different meaning: a unit of value, a coin.

Through the cryptographic algorithm, the network’s map keeps expanding as people keep transacting with their coins and new ones are mined. But the coin’s format, as it appears in the digital wallet, never changes. A relation is established between the linearity of past transactions and the non-dimensionality of a coin’s current value. Once again, while the terminals “see” the expansion of the map (the first layer of technical production of images), the user only sees the coin in the digital wallet (the second layer). It is, so to speak, a catalog value, the same kind of data registration that can be observed in other instances of digital interaction technologies, the most evident arguably being Facebook’s algorithm: memory as apprehensible value. The difference is that social network algorithms employ their users’ accumulated actions to calculate future behavior, target personalized advertising, and retransmit these blocks of lived temporality to commercial partners. Bitcoin, in turn, verifies the provenance of each chain as it appears in its coded, cryptographic form, in order to attest to the value of a digital object that, in the eyes of the user, remains the same.

As in the case of HFT, it is a technical innovation that operates to a great extent with the dialectical relation between the linear temporality registered in its codes (each block linked to the next in the form of a chain) and the real-time affirmation of the figure that reaches the user (the coin). While HFT produces a map of the market in an interval of nanoseconds (in real time), the bitcoin protocol produces a present image of sediment: past performances, past acts, transfigured into lasting blocks of code. Once again, one can resort to Vilém Flusser in order to understand how this process involves technical images: organizing particles into a given modality of relation, by means of a code. Indeed, each transaction is originally a singular event, destined to be forgotten, just like all sales and purchases made in any market over the years. There is no necessary connection between a transaction at one point in time and place on earth and another at a completely different point. The only connection is that they are part of the same networks of relations; for example, exchanges of dollars and yen by large banking firms in the global financial market are accounted for in national balances of payments and eventually get taxed (if the taxes are not bypassed). Traditional money is said to be indistinguishable, by which it is meant that any accounting registration, or any bill or coin, is interchangeable (except when stolen money is traced by its serial numbers). In other words, there is hardly any individuality in a monetary unit. This is not the case with a bitcoin or a fraction thereof. A particular sequence of code, when registered in someone’s electronic wallet, translates an indeterminate series of past events as blocks of encrypted code. More importantly, it emerges as valid, perceptible, and relatable (i.e., as an element of reality) from the decoding of this translated past into a new form. From the viewpoint of the network, at the first layer, read by the algorithm, the chain is a registry in relation to other registries, as in the structure of a linguistic or kinship system. From the viewpoint of the user, it is a coin.

Therefore, even though the “coin” is a codified history, made present for purposes of exchange, the owner of the coin cannot actually see this; she can only trust that it is there. This is its value. The machine that accesses the system and the code is the only party capable of reading, identifying, and assuring the validity of the whole process. In the Bitcoin protocol, the blockchain organizes the particles of past and unrelated transactions into coded form, thus obtaining a technical image of one unit of currency. The images all machines connected to the network can read (the blockchain) are translated into the coins appearing in the user’s wallet, and that is what she can see. There are two distinct moments: one in which the machine “looks up” the blocks and compares them with information obtained from the network, and another in which the user sees the value in her wallet. This double layer of images is similar to what we have seen in HFT, but it deals with temporality differently. The owner recognizes value not because she sees it emerge, but because the screen tells her the process has occurred.

Having described the algorithm’s operations in creating a coin with the blockchain protocol, it is possible to raise the question of how this whole system fares with respect to the prospects and risks Flusser associates with the universe of technical images. As far as cryptocurrencies are concerned, how can we determine whether we are on our way towards a dialogic society of envisioners, organizing flows of chaotic data into information, or towards an automated, totalitarian society of receivers overwhelmed by a redundant flood of coded images? Much of the controversy surrounding bitcoin, as “money of the future” or “speculative tool perfect for scams,” can be rephrased as versions of this question. At first glance, the Bitcoin protocol seems to be quite informative indeed, as it produces discrete units of meaning from a multitude of disconnected past events, provided they are codified and reorganized by the algorithm. Contrary to HFT, the algorithms used for bitcoin do not adapt to the users’ behavior in a way that produces ever more probability and redundancy. For Dupont (2014), the question hinges upon the determination of how one orders the world, i.e., the possibilities of acting upon and perceiving reality. As Flusser (2011a, b) writes, every society has people specialized in symbol manipulation, and Dupont stresses the political aspect of working with symbols once they become the central access to reality, the determining factor for what is possible or not.

Kostakis and Bauwens (2014) also place bitcoin at the crossroads of how future societies will be organized: either as autonomous, self-organizing communities of people who freely manipulate the codes and apparatuses, or as a system dominated by a techno-aristocracy in which only a few speculators can control the system and hoard the coins. The authors place the development of bitcoin within the recent evolution of what is known as “cognitive capitalism,” a stage in which capital accumulation is obtained through the apprehension of affect, desire, thought, and memory. The risk concerning digital initiatives such as bitcoin is the creation of a “neo-feudalism” (Kostakis & Bauwens, 2014, p. 35), where peer-to-peer technologies are stripped of their most creative features and reduced to the reproduction of a logic of accumulation. This happens once the community-based creation of currencies, involving active transformations of the algorithm and collective deliberation to choose the criteria for valuation, loses ground to the comfort of automated, passive usage, as has been witnessed in the case of social networks. Bauwens and Kostakis see the proliferation of other digital currencies as a sign that communities are actively taking possession of the blockchain technology as applied to money, in a way that values and underscores the open-access character of much of the code employed.

Yet the individual success of bitcoin and the uses to which it has been put (accumulating reserves for speculation; dodging regulation and surveillance in underground networks) appear to the authors as a sign that the system is tending towards the concentration of power and submission to a techno-aristocratic (“netarchical,” in their words) form of control. In this case, the myriad bits of information lose their capacity to inform by establishing relations and become only the raw material over which pre-existing relations of domination are reaffirmed. This is why Lovink and Tkacz (2015) are skeptical about the same proliferation of currencies that Bauwens and Kostakis view positively. For them, it is a sign of the deepening of a “neoliberal subjectivity” in which one is in “permanent start-up mode” (Lovink & Tkacz, 2015), reducing the range of possible modes of action to the single profit-seeking logic of personal entrepreneurship. In this case, the behavior of individuals and collectivities becomes more predictable, more probable, giving way to automation, which renders the reiteration of the same gestures and behaviors swifter and more efficient.

6 Closing Remarks

It is no coincidence that both high-frequency trading and bitcoin have become so controversial. They represent the most fascinating and advanced applications of contemporary communication and information technologies to the world of commerce and finance. Finance and economics already pervade people’s lives, as themes such as economic growth, public debt, unemployment rates, and productivity define much of our personal and public policy decision-making. If these and other initiatives from the digital world can transform the way values are represented, shared, and transacted, they can disrupt deeply entrenched structures of the contemporary world.

We have seen that the evolution of money can be described in relation to the evolution of technologies dedicated to symbol manipulation. Furthermore, the organization and structure of symbols affect our perception, consciousness, and usage of money. Orthodox monetary theories still argue that, regardless of how it is technologically constructed, money always operates in the same way, varying only in the degree to which it performs more or less smoothly in commercial transactions or as a saving (or hoarding) device. Nonetheless, if ceremonial money, incarnated both in physical objects and in detachable images, can be associated with the gift societies studied by Mauss, and scriptural money can be associated with the evolution of pre-industrial and industrial economies through the modern period and into the contemporary situation of state monies not pegged to gold or any other “substance,” then the operational reality of money must necessarily also change when the mode of symbol manipulation changes, as is the case in the digital realm.

Bitcoin is an attempt to change money itself, its substance, its generation. Algorithmic trading, most notably HFT, is an attempt to change money’s circulation and operation. The question I have attempted to raise concerns not only the ontological or ontogenetic problem of what these financial and monetary phenomena are and how they come about. What is at stake is above all the ethical and social problem of what the technical and intellectual effort of producing them is mobilized for. Is HFT meant to accelerate the race between trading companies and ultimately favor the biggest financial corporations, which can afford more developers and better-placed servers at the exchange? Is bitcoin meant to replace the money used by the financial sector, while keeping business running, all in all, as usual?

The fact that both instances of digital financial innovation generate doubt about the kind of markets and, by extension, the kind of societies being built is significant: this indeterminacy calls for heightened attention. By choosing the way in which people will interact with these algorithms and the systems in which they are embedded, humanity will choose the role both the people and the apparatuses will play in the emergent society of accelerated communication and real-time information. It is, of course, a matter of “finding new weapons,” as Dupont (2014) says, following Deleuze’s 1990 text on the societies of control. But it must also be seen as a matter of imagination (Flusser, 2000, p. 83), i.e., of finding the ways in which human creativity can be translated into images (imagined) in a powerful way, rather than a merely imitative one, a hallucination.