An ant, viewed as a behaving system, is quite simple. The apparent complexity of its behavior over time is largely a reflection of the complexity of the environment in which it finds itself. (Simon 1969)

1 Introduction

Simon's famous ant metaphor points to the possibility of two alternative representations for the same complex phenomenon: the ant's convoluted path on the beach may be described as complex behaviour against a simple background, or as simple behaviour against a complex background (or as a little of both, of course). The metaphor also supports the intuition that complexity is largely in the eye of the beholder – a fruitful philosophical position to take, as it encourages the observer to seek the representation that is the most useful for the purpose at hand rather than engage in a wild goose chase for “the” correct kind of representation. However, the ant-on-the-beach scenario falls short in one important respect: it views phenomena as consisting of a system of interest and an environment, whereas in fact every system description also involves a (usually tacit) underlying spatio-temporal framework.

I propose the notion of polyplexity as a new way of approaching the study of the most complex of systems, that is, the systems studied in the social and policy sciences. Polyplexity goes one step further than most conventional approaches to complex systems by taking into account the possibility that the space and time within which a phenomenon unfolds may themselves be complex. It proposes a “divide and conquer” modelling strategy based on apportioning the apparent complexity of a phenomenon among the three major constituent parts of any system representation: the system of interest itself, its environment, and its spatio-temporal context. Polyplexity suggests that the widely acknowledged greater complexity of social relative to natural science phenomena may be due in part to more complex underlying space–time frameworks. Should this be the case, accounting for spatio-temporal complexity in addition to system and environment complexity in social science modelling may help simplify the representation of certain systems of interest.

Social scientists embraced the complexity paradigm fairly early on, making major contributions of their own along the way. However, despite increasingly sophisticated models of complex socio-spatial dynamics and agent-based systems, social science has adopted more or less unquestioningly the Cartesian framework of the natural sciences. The result is in many ways a more elaborate form of “social physics”, with models such as those simulating the emergent behaviour of growing sand piles replacing the planetary “gravity model” metaphors of the 1950s and 1960s. On the whole, the space and time of social science remain monotonously flat. The shortcomings of the current homogeneous, isotropic space–time assumptions may be especially evident in the attempts of geographers and others to model information-age phenomena such as the “death of distance” or the “extensible individual”. It is conceivable that these taken-for-granted Cartesian assumptions are hampering progress in a much broader spectrum of social science and policy research. After several decades of achievements in complex system modelling, I believe that the field is mature enough to consider exploring approaches more specifically tailored to the challenges of the “difficult” (as opposed to “hard”) sciences. One possible direction would be to focus on notions of social space and time and their potential role in simplifying the representation of complex social phenomena. This emphasis seems to make sense because, as Nigel Thrift notes, “complexity theory is preternaturally spatial” (cited in O'Sullivan 2004, p. 284). Polyplexity is meant to be an early wobbly step in that direction.

Not surprisingly, complexity is itself a complex notion. There are several different complexity paradigms highlighting its different aspects: discontinuous change under smooth parameter variation, self-organization, emergence, path dependence, feedback, deterministic unpredictability, and so on. These include Thom's (1975) catastrophe theory, Prigogine's (1980) bifurcation theory, Haken's (1983) synergetics, chaos theory, and a host of related computational approaches among which agent-based simulation and cellular automata modelling are especially popular in the Anglo-American world. Less well explored outside its field of origin is one of the oldest complexity paradigms, that deriving from Turing's work on the mathematical theory of computation (see Copeland 2004). Through its two major branches of automata theory and formal language theory, the theory of computation contributes the notion that complex representations can be built gradually from simpler ones through the systematic expansion of the domains of the operands and operators considered. Polyplexity hopes to capitalize on this principle, though the details are still nebulous.

Going back to the issue of a complexity science for the social and policy sciences, there are a number of desiderata, most of which are not very well served by more traditional approaches to complexity. For example, it would be really nice if we were able to handle the following kinds of problems with something like the power and elegance possible for the description of complex physical processes:

  • The description of social processes and events, which involve reasons (telic considerations) as well as causes

  • The representation within the same general framework of multiple perspectives on – and interpretations of – the same social process or event

  • The modelling of emerging institutional structures that are not simply the result of bottom-up interactions

  • The representation of individual decision and choice in highly complex environments

  • The support of decision making in planning and policy under deep uncertainty and conflict

  • Etc. (add your own wish list here)

An overarching desideratum would be the development of a unified perspective on complex system modelling in the social and policy sciences for handling and integrating the above kinds of issues.

As an agenda for polyplexity, this sounds extravagant to the point of foolishness – but who knows? The time may be right for confronting tentative, high-risk ideas of this kind, such as the notion that polyplexity could perhaps simplify the representations of social phenomena and policy problems of interest by relegating some of their apparent complexity to suitably complex but still manageable spatio-temporal structures. To be useful, though, these structures should first be integrated within some more general and systematic framework. For example, Simon's ant-on-the-beach metaphor could be generalized to the “principle of consistently optimizing behaviour”, stating that “every choice is an optimal choice when examined against the appropriate background of empirical, logical and spatio-temporal assumptions”.

In the following pages I discuss the three main components of the notion of polyplexity. Complex time and complex space are examined in the next section, and then the notion of “prior structure” is presented as a perspective on modelling that might conceivably support the philosophical ambitions of polyplexity. The conclusion, which is by necessity sober and brief, mentions some of the challenges of pursuing such a program, and summarizes numerous open questions that this chapter leaves in its wake.

2 Complex Time, Complex Space

This is not the right place to review the achievements of complex systems research in social science. Several of the field's protagonists are represented in this volume and can speak for themselves. The breadth of the social scientists' contributions to the complex systems paradigm has indeed been quite extraordinary, covering both discrete and continuous systems, both the macro- and the micro-perspective, both statistical and process modelling, both analysis and policy-oriented synthesis, and both conceptual and applied research. Building on that wealth of previous efforts, this chapter attempts to glimpse fuzzy visions of the future rather than retread the brilliant past.

2.1 Complex Time

In a book entitled “The Economics of Time and Ignorance”, O'Driscoll and Rizzo (1985) examine the nature of prediction in economics and conclude that under no circumstances can prediction be complete, because of the existence of “real” time and “real” ignorance. The authors contrast “real” time with Newtonian time, which is simply a framework for ordering events, a reference line against which events can be mapped as either points or intervals. A basic property of time-as-framework is that it does not in itself affect events. In other words, Newtonian time does not bring change; it only serves to register change as it happens. Time is fully analogous to (Newtonian, absolute) space, and has the same three basic properties: homogeneity (all time-points are the same except for their position along the time line); continuous divisibility (implying that neighbouring time points are independent of one another); and causal inertness (time is independent of its contents: in itself it causes nothing). In any model based on Newtonian time, even a fully dynamic one, it is the present as we know it that is sent rolling along the time line. As the great economist F. H. Hahn observed, in such models “the future is merely the unfolding of a tapestry that exists now”.

“Real” time by contrast is characterized by the properties of dynamic continuity, heterogeneity, and causal efficacy. These properties preclude prediction, hence the notion of “real” ignorance. Dynamic continuity is based on the two aspects of memory and expectation. The meaning of each moment depends on its place in the context of what we remember of the past and expect for the future, just as in the experience of music each note can only be appreciated relative to those heard a moment before and those anticipated yet to come. More generally, the timing of an event changes its nature to the extent that the unique context of other events within which it occurs affects its role in the determination of subsequent events. This is the case, for example, with economic agents whose response to events today depends on what they learned yesterday (which includes the responses of other agents to yesterday's events), as well as on what they expect to happen tomorrow (which includes how they expect other agents will act). The property of heterogeneity of real time follows from dynamic continuity in that no two instants can be the same, each one relating to a different set of preceding and succeeding moments and their remembered or anticipated contents. This makes events in real time genuinely non-repeatable. Thus non-repeatability emerges from an event's temporal “place value” – its order in the flow of events. Causal efficacy is a further corollary of the above in that dynamic, heterogeneous time causes actions and events to be different now from what they would have been under the same conditions some time earlier or later. Related examples, also from economics, are the notions of time inconsistency and of discounting, whereby the utility of a given option (say, of buying insurance or of the government raising interest rates) can vary greatly depending on the time when the choice must be made. In general, the nature of the uncertainty that this conception of time implies is much more profound than the two kinds commonly considered in science: the case where the value of a specific outcome is unknown but the ex ante probability distribution of outcomes is known, and the case where the underlying probability distribution itself is not known. Here we are dealing with situations where not just the probabilities, but even some qualitative characteristics of outcomes – all the way to the very nature of the possible outcomes themselves – cannot be determined ex ante because they are not part of “a tapestry that exists now”, in Hahn's famous words quoted above. Under the name of “deep uncertainty” this latter notion is prominent in the work of a group of researchers from the RAND Corporation advocating a general approach to planning that takes into account the virtual impossibility of prediction (Lempert et al. 2004).
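
The time-inconsistency point lends itself to a small worked example. The sketch below is my own illustration, not drawn from the chapter, and its parameter values and payoffs are invented: it contrasts exponential discounting, which ranks a pair of delayed payoffs the same way no matter when the choice is made, with hyperbolic discounting, under which the ranking of the very same options reverses as the moment of choice recedes into the future.

    # Exponential vs. hyperbolic discounting (illustrative values only).
    def exponential(t, delta=0.95):
        return delta ** t

    def hyperbolic(t, k=0.25):
        return 1.0 / (1.0 + k * t)

    def prefers_larger_later(discount, delay):
        # Option A: 100 units after `delay` periods; option B: 110 one period later.
        return 110 * discount(delay + 1) > 100 * discount(delay)

    for delay in (0, 12):
        print(delay,
              prefers_larger_later(exponential, delay),   # True, True
              prefers_larger_later(hyperbolic, delay))    # False, then True

Under the exponential rule the choice is the same at any distance; under the hyperbolic rule the utility of the same option depends on when the choice must be made – the time-inconsistency phenomenon referred to above.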

Real time is much closer to the psychological intuition of a dynamic flow of ever-changing experiences than to the traditional scientific view of a directed axis used as a ruler for pegging events. Its significance is obvious for social science problems involving intentional agents. However, what this conception of time addresses is not just human cognition and action but more generally historicity, or the claim that the nature of any phenomenon depends to some extent on its place within a process of historical development. A good example from natural science would be the significance of a particular mutation in an organism, which may or may not have an evolutionary value depending on the timing (and placing) of its appearance. The fact that it is impossible to predict future speciation in biology is further evidence that the processes of evolution work in real time. The similarity of real time with the notion of path dependence in complexity theory is surely not coincidental.

Historians have their own complex models of historical time. According to a group of historians involved in a major digital atlas project, modelling time as a fourth dimension downgrades it into being only a facet of space whereas, in fact, time operates according to very particular principles. This is because at the core of historical understanding is the event rather than the object or the point process, and historical events are not described as discrete entities but as networks of other events linked together by causal and telic relations at different levels of granularity. To support such a view, multiple lines of real time may be needed, and these would be punctuated rather than continuous since knowledge of events is episodic and fragmentary. This historical view of complex time provides the complementary macro-perspective to O'Driscoll and Rizzo's mostly individual-level real time, and in doing so it transposes it to a level that is at least an order of magnitude more complex. Both these approaches reject the simple one-dimensional view used in practically all complex systems research in favour of conceptualizations that emphasize intimate causal and telic linkages among time, events, choices, and their ever-changing contexts.
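
To make the historians' conception a little more tangible, here is a minimal data-structure sketch – entirely my own invention, not drawn from the atlas project: events are nodes in a network rather than points on an axis, they carry possibly unknown time spans on distinct, punctuated timelines, and they are linked to other events by causal (“because”) and telic (“in order to”) relations.

    # Hypothetical sketch of historical events as a network, not a time line.
    from dataclasses import dataclass, field

    @dataclass
    class Event:
        name: str
        timeline: str                  # e.g. "political", "economic"
        span: tuple = None             # (start, end); None = fragmentary record
        causes: list = field(default_factory=list)    # "because" links
        purposes: list = field(default_factory=list)  # "in order to" links

    war = Event("war", "political", (1618, 1648))
    treaty = Event("peace treaty", "political", (1648, 1648), causes=[war])
    trade = Event("trade recovery", "economic")       # span unknown: punctuated
    treaty.purposes.append(trade)     # concluded in order to restore trade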

The need to broaden the notion of time has also been keenly felt within the hard-nosed, empiricist geographic information science community. Several models have been proposed in the context of “temporal GIS” beyond linear time: cyclic time, branching time, totally- and partially-ordered time, valid and transaction time, clock- vs. event-driven time, etc. Each of these brings some useful modification to the simple axis of classical physics, but the characteristic Newtonian causal inertness remains: time is still the neutral framework against which independently unfolding events are projected, sorted and measured. None of these models (with the possible exception of some interpretations of branching time) approaches the dynamic, causally efficacious conception of real time that O'Driscoll and Rizzo believe to be so important in economics and the social sciences in general, and that historians would like to further develop into a highly complex structure.
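
As one concrete instance of the temporal-GIS notions listed above, the sketch below illustrates valid and transaction time with a bitemporal record; the class and the example facts are invented for illustration. Valid time is when a fact held in the world, transaction time is when the database learned of it, and the two can be queried independently.

    # Bitemporal records: valid time vs. transaction time (illustrative only).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class BitemporalFact:
        fact: str
        valid_from: int     # year the fact began to hold in the world
        valid_to: int       # year it ceased to hold (9999 = still holds)
        recorded_at: int    # year the record entered the database

    history = [
        BitemporalFact("parcel 12 zoned residential", 1990, 2005, recorded_at=1992),
        BitemporalFact("parcel 12 zoned commercial", 2005, 9999, recorded_at=2007),
    ]

    def as_known_in(year, facts):
        # Transaction-time slice: what the database believed in `year`.
        return [f for f in facts if f.recorded_at <= year]

    print(len(as_known_in(2000, history)))   # 1: the rezoning not yet recorded

Note that even this richer structure remains causally inert in the sense discussed above: the time stamps register change but do not produce it.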

2.2 Complex Space

From non-Euclidean geometries to relativistic space–time to today's high-dimensional spaces of string theory, physics has been a treasure trove of complex models of space. However, attempts to transfer some of these conceptions to the social science domain have not on the whole been successful. Social scientists have had much better luck with relational and network spaces such as those of graph and network theory or the even more complex multi-dimensional spaces described by Q-analysis, Galois lattices, self-organizing maps (SOM) and other such techniques (see for example Gatrell 1983; Freeman and White 1993; Agarwal and Skupin 2008). The connection of these relational spaces with the space of everyday social life is however somewhat tenuous, since they cannot deal directly with fundamental quantitative properties of physical space such as distance, direction, shape, the elementary Euclidean transformations, or spatial autocorrelation.

Notions of complex space also abound in geography and related disciplines and have often been used to simplify or visually enhance the representation of particular kinds of phenomena. Space transforms are a particularly prominent family of complex spaces, and of these, cartographic projections are the most widely known and used. Other familiar kinds include cartograms, logarithmic spaces, velocity fields, representations of cognitive maps, and parallel coordinate spaces (see for example Angel and Hyman 1976; Borden 1996; Gould and White 1974; Golledge and Stimson 1997; Inselberg and Dimsdale 1994). Some of these are explicitly designed to do what Simon's ant metaphor suggests, that is, they complexify the space so as to simplify the representation of the phenomenon of interest. For example, representations of cognitive maps produced by eliciting pairwise distance estimates from subjects are converted through the technique of bidimensional regression into heavily distorted, crumpled and stretched transforms of actual maps. These representations may then be used to show how errors in distance perception correlate with sub-optimal spatial choices by individuals or groups. However, all of the complex space representations mentioned here are either formulated for very specific kinds of problems, or they are too general. For polyplexity a middle road would be desirable, whereby classes of social science and policy problems could be handled by the same general approach to complex space.
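
The core of bidimensional regression can be sketched in a few lines – the coordinates below are invented, and this is only the simple Euclidean variant of the technique, fitted by least squares. The true map locations are mapped onto the subjects' estimated locations by the best-fitting rotation, scaling and translation, and the residuals quantify the “crumpling” of the cognitive map.

    # Euclidean bidimensional regression, sketched with invented coordinates.
    import numpy as np

    true_xy = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], float)
    cognitive_xy = np.array([[1, 0], [13, 1], [12, 14], [0, 11]], float)

    # Encode (x, y) as complex z; fit w ~ a*z + b by least squares, where
    # a combines rotation and scale and b is the translation.
    z = true_xy[:, 0] + 1j * true_xy[:, 1]
    w = cognitive_xy[:, 0] + 1j * cognitive_xy[:, 1]
    design = np.column_stack([z, np.ones_like(z)])
    (a, b), *_ = np.linalg.lstsq(design, w, rcond=None)

    residuals = np.abs(w - (a * z + b))   # local distortion of the cognitive map
    print(abs(a), np.degrees(np.angle(a)), residuals.round(2))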

A couple of my own attempts at setting up models of complex spaces may be relevant to polyplexity. The first of these is the concept of proximal space (Couclelis 1997). Proximal space is formed by the set of all locations that have some functional or other kind of non-explicitly spatial relation with a given location of interest at a given time. It is a generalization of the notion of neighbourhood as used in cellular automata and other kinds of models, whereby proximity is defined not in terms of physical distance or adjacency but in terms of the special relationship a location has with other locations. For example, the set of all locations of my physical and virtual social contacts forms the proximal social space of my home location. Proximal space is thus a network space, but one that is not only rooted in actual geographical space but also lends itself to simulation modelling: indeed, it supports a formal generalization of cellular automata called geo-algebra (Takeyama and Couclelis 1997). This is one example of how one could simplify the representation of a dynamic process by relegating some of its complexity to the embedding space. It is possible, though this has not yet been explored, that a model analogous to proximal space (“proximal time”) may also be developed for historical time as discussed above. Proximal time would represent the set of key moments and intervals relevant to a specific event of interest and its aftermath – say, the times associated with the genesis and subsequent fate of this chapter, from the original invitation by this volume's coeditors through the fallout resulting from its publication. Proximal time as defined here would thus rejoin O'Driscoll and Rizzo's notion of the heterogeneity of real time, whereby no two instants can be the same because each one relates to a different set of preceding and succeeding moments.
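
In the spirit of – though far simpler than – geo-algebra, the sketch below shows a cellular automaton whose neighbourhoods are proximal rather than spatial: which cells are “near” is given by a relation (here an invented contact map), not by adjacency on a grid.

    # A cellular automaton over a proximal-space neighbourhood (invented data).
    neighbours = {                      # who is functionally "near" whom
        "home": ["office", "school", "webshop"],
        "office": ["home", "webshop"],
        "school": ["home"],
        "webshop": ["home", "office"],
    }
    state = {cell: 0 for cell in neighbours}
    state["webshop"] = 1                # seed an activity at one location

    def step(state):
        # A cell becomes active if it, or any proximal neighbour, is active.
        return {cell: int(state[cell] or any(state[n] for n in nbrs))
                for cell, nbrs in neighbours.items()}

    state = step(state)
    print(state)   # activity spreads through relational, not physical, space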

Some earlier work considers not one, but a sequence of interrelated spaces, seeking to capture their distinguishing characteristics in a systematic and reproducible manner (Couclelis and Gale 1986). That project explores the meaning of several more or less vague notions of space used in psychology, which include, beyond the Euclidean, spaces referred to as physical, sensorimotor, perceptual, cognitive, and symbolic. In that research we propose a hierarchy of six nested levels corresponding to the above sequence of notions of space and representing, psychologically, a progression of increasingly complex levels of an individual's spatio-temporal awareness. The same empirical experience or phenomenon may be defined against any one (or all) of these spaces, with different implications each time. To represent the linkages between levels the model relies on the notion of selective operators as used in spectral theory, while the first four levels are also differentiated internally by means of the family of algebraic structures belonging to group theory. In this model the operands are spatial ‘atoms’, the empirical interpretations of which vary from level to level (points, locations, positions, vantage points, or places), and the group-theoretic operators are the links between atoms, called “moves” but again meaning different things at each level.

Group theory focuses on operations and transformations, rather than operands, and involves five axioms known as the closure law (G1), the associative law (G2), the existence of an identity element (G3), the existence of inverses (G4), and the commutative law (G5). An algebraic structure conforming to all five axioms (for example, the integers under addition) is called an abelian group. The other members of the group family are obtained by dropping one or more of these axioms. Thus axioms G1–G4 (but not G5) define a group; axioms G1–G3 (but not G4 and G5) define a monoid; and axioms G1 and G2 (but not G3–G5) define a semi-group. A correspondence between these algebraic structures and the hierarchy of spaces is tentatively set up as shown in Table 6.1, based on certain empirical properties of each space in the sequence. Thus, for example, in the physical space of everyday experience – unlike in pure Euclidean space – the commutative property (G5) does not hold with the force of an axiom because the direction of gravity causes space to be anisotropic in the up/down direction. (Bodies that are “up” can easily go “down” but the reverse is usually not true). Similar considerations result in the elimination of one more, then two more group axioms for sensorimotor and perceptual space, respectively. Thus sensorimotor space, the space in which living organisms (and also robots) move, is like physical space in that it lacks the commutative property, but it also lacks a true inverse (G4) because moves in sensorimotor space can never be completely reversed. Even if an animal or machine returns to the exact same location it started from, its state will no longer be exactly what it was when the move was initiated: it will have become more tired, more hungry, more worn down, or it will have acquired new bodily experiences: it will have “depleted its batteries” or enriched its sensorimotor memories to some extent. One level up, perceptual space is in many ways like sensorimotor space, but lacks the identity element (G3), because even the “stay-as-you-are” move is no guarantee that perceptual identity will be maintained: it is well known that attention filters what we perceive at any particular time. Beyond that level the model breaks down, because it is difficult to give meaning to a space characterized only by the closure law.
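
The group-family distinctions above are straightforward to operationalize. The following sketch is my own – the example operation, addition modulo 4, is a textbook case and not anything from the original study: it tests a finite set of “moves” against the five axioms and names the resulting structure.

    # Which group axioms does a finite set of "moves" satisfy under an
    # operation? (Toy check; the mod-4 example is illustrative only.)
    from itertools import product

    def classify(elements, op, identity=None):
        G1 = all(op(a, b) in elements for a, b in product(elements, repeat=2))
        G2 = all(op(op(a, b), c) == op(a, op(b, c))
                 for a, b, c in product(elements, repeat=3))
        G3 = identity in elements and all(
            op(identity, a) == a == op(a, identity) for a in elements)
        G4 = G3 and all(any(op(a, b) == identity == op(b, a) for b in elements)
                        for a in elements)
        G5 = all(op(a, b) == op(b, a) for a, b in product(elements, repeat=2))
        if G1 and G2 and G3 and G4:
            return "abelian group" if G5 else "group"
        if G1 and G2 and G3:
            return "monoid"
        if G1 and G2:
            return "semi-group"
        return "none of the above"

    print(classify({0, 1, 2, 3}, lambda a, b: (a + b) % 4, identity=0))
    # -> "abelian group": all five axioms hold, as for pure Euclidean space

Dropping the identity check (G3) would demote the same structure from monoid to semi-group, mirroring the step from sensorimotor to perceptual space described above.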

Table 6.1 Concepts of space and corresponding algebraic structures

  Concept of space     Group axioms satisfied   Algebraic structure
  Euclidean space      G1–G5                    Abelian group
  Physical space       G1–G4                    Group
  Sensorimotor space   G1–G3                    Monoid
  Perceptual space     G1–G2                    Semi-group

Besides group theory, the other mathematical notion underlying the model is that of selective operators. A selective operator may be thought of as a sieve or filter that sorts the entities corresponding to some particular description out of a universe U of entities. This method is used in the model to construct the lower four levels out of each other, by selecting out of the universe of group properties first two, then three, then four, then all five group axioms. It is uncertain whether the remaining two levels (cognitive and symbolic) really belong in this hierarchy, since they are not subject to the constraints of pure Euclidean or of physical space – though they are most definitely subject to the experience of these spaces.
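
A toy rendering of this construction – the naming is mine – treats a selective operator as a predicate-driven filter over the universe U of group axioms, so that each of the four lower levels is literally selected out of U:

    # Selective operators as sieves over a universe of group axioms (toy sketch).
    U = {"G1", "G2", "G3", "G4", "G5"}

    def select(universe, description):
        # The sieve: keep the entities fitting the given description.
        return {x for x in universe if description(x)}

    levels = {
        "perceptual":   select(U, lambda g: g in {"G1", "G2"}),             # semi-group
        "sensorimotor": select(U, lambda g: g in {"G1", "G2", "G3"}),       # monoid
        "physical":     select(U, lambda g: g in {"G1", "G2", "G3", "G4"}), # group
        "Euclidean":    select(U, lambda g: True),                          # abelian group
    }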

Regardless of its merit (or lack thereof) as a formalized description of the range of individual awareness of space, two aspects of this model are relevant to the notion of polyplexity. First, at the sensorimotor level we find the first intimations of real time (in the form of the irreversibility of physical effort), and this impression is reinforced at the next level up, though the details cannot be discussed here. Second, the model hints at the possibility of developing an ordered sequence of mutually consistent models of space, of varying degrees of complexity, for use in the social and behavioural sciences. This last point is significant because hierarchies of complex social spaces keep being proposed in geography and related fields with insistent regularity. There may be something to that idea that is worth pursuing further.

3 Prior Structure, Determination, and Hierarchical Spatio-Temporal Ontologies

Spatio-temporal ontologies are a hot topic in geographic information science these days. The motivations are mostly practical, such as the need to improve interoperability among different GIS platforms, but some of the questions raised by that work are decidedly theoretical, if not philosophical. Similar though less formalized efforts also originate in geography as researchers attempt to classify and make sense of the unwieldy variety of available conceptual and quantitative models of geographical phenomena. The vast majority of these proposals are hierarchical, involving “tiers” or “levels” or “spaces” of different degrees of complexity and characterized by very different properties. Here are some quick examples, in chronological order: (1) Mathematical space, physical space, socioeconomic space, behavioural space, experiential space (Couclelis 1992); (2) Physical level, functional level, biological level, intentional level, social level (Guarino 1999); (3) Physical reality, observable reality, object world, social reality, cognitive agents (Frank 2003). Or, more specifically regarding the complexity of spatial decision models: (4) Stimulus–response (basic observation), stimulus–response (controlled experiment), rational decision, production system, advanced computational process model (Couclelis 1986). And also: (5) Decision making as a variable, as a probability function without feedback, as a probability function with feedback, by one type of agent, by multiple interacting agent types (Agarwal et al. 2002).

Note that even though all the above examples are spatio-temporal hierarchies, they are not hierarchies of nested spatial and temporal scales but rather of semantically different planes on which qualitatively different kinds of spatio-temporal phenomena can be described. These and several other similar efforts all seem to agree that the physical is simple whereas the social and mental are complex and hard, but other than that there are few commonalities in approaches and perspectives. The last two examples however – (4) and (5), involving decision making models – do have something interesting in common in that they take an informational rather than an empiricist approach to the issue. The first explicitly and the second implicitly, both recognize that the same system of interest may be modelled at different levels of complexity, from elementary to extremely complex, depending on how much information one is able or willing to include in the representation. They thus side with the perspective of mathematical computer science reflected in the hierarchical theory of modelling and simulation by Zeigler et al. (2001), which is itself based on the hierarchy of automata theory (finite automata, pushdown automata, linear bounded automata, Turing machines) and the corresponding one of formal language theory (regular, context-free, context-sensitive, recursively enumerable languages; see Hopcroft and Ullman 1979).
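
The sense in which these hierarchies grade representational power can be seen in miniature with the two textbook languages below – the examples are the standard ones, not taken from Zeigler et al.: a finite automaton suffices for the regular language (ab)*, while recognizing aⁿbⁿ requires the added memory of a pushdown automaton.

    # (ab)* needs only finite states; a^n b^n needs a counter/stack.
    def accepts_ab_star(s):
        state = 0                      # 0: expect 'a'; 1: expect 'b'
        for ch in s:
            if state == 0 and ch == "a":
                state = 1
            elif state == 1 and ch == "b":
                state = 0
            else:
                return False
        return state == 0

    def accepts_anbn(s):
        count, seen_b = 0, False       # the "stack": a's not yet matched
        for ch in s:
            if ch == "a" and not seen_b:
                count += 1
            elif ch == "b" and count > 0:
                seen_b, count = True, count - 1
            else:
                return False
        return count == 0

    print(accepts_ab_star("ababab"), accepts_anbn("aaabbb"))   # True True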

Somewhat along similar lines is the notion of prior structure in modelling that I briefly explored many years ago (Couclelis 1984). That was part of an attempt to figure out where the predictive power of some simple (and very unrealistic) mathematical urban models comes from. The idea was that in every complex system there are a number of constraints, formal as well as empirical, that can be known a priori to limit the range of observable system states. Empirical constraints (called historical prior information) derive from certain aspects of the system – physical, biological, technological, institutional or social – that can be more or less reliably assumed to remain reasonably constant within the forecasting horizon of the model. For example: the rate of change in the life expectancy of a population, the rate of transformation of raw materials into structures, or the fact that there will still be fewer commuters on the roads on Sundays than on most weekdays. Such considerations have of course been at the basis of numerical forecasting techniques for many years and are expressed in the distinction between “fast” and “slow” variables in dynamic modelling. The notion of prior structure stresses the importance of being able to specify the level of analysis at which these kinds of empirically derived constraints become operative.

Much more intriguing however is the second class of constraints, called structural (or logical) prior information. This derives from the formal invariances that characterize the fundamental logico-mathematical structures (such as set theory, topology, number theory and logic) that underlie mathematical and computational models. As with historical prior information, the nature and amount of logical prior information available depend on the level of model specification. Together, empirical and logical prior information make up the model's prior structure, that is, the envelope of constraints which incorporates all positive knowledge about the system of interest at a specific level. Within that envelope all allowable microstates are equiprobable, and Wilson's (1970) entropy-maximizing approach can be used to identify the most likely system macrostates. Wilson's seminal statistical–mechanical derivation of spatial interaction (formerly “gravity”) models rescued these from the prevailing crude planetary analogies, while also providing a philosophically significant insight into the value of an informational – as opposed to empiricist – perspective.
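
For readers unfamiliar with the entropy-maximizing approach, the sketch below gives its best-known instance, the doubly constrained spatial interaction model; the numbers are invented. Given origin totals O_i, destination totals D_j and travel costs c_ij as the envelope of constraints, the most probable macrostate T_ij follows from iteratively computed balancing factors.

    # Doubly constrained spatial interaction model (invented numbers).
    import numpy as np

    O = np.array([100.0, 200.0])       # trips leaving each origin zone
    D = np.array([150.0, 150.0])       # trips attracted to each destination
    c = np.array([[1.0, 3.0],          # travel costs c_ij
                  [2.0, 1.0]])
    beta = 0.5
    f = np.exp(-beta * c)              # cost-deterrence function

    A = np.ones_like(O)                # balancing factors, solved iteratively
    B = np.ones_like(D)
    for _ in range(100):
        A = 1.0 / (f * (B * D)).sum(axis=1)
        B = 1.0 / (f.T * (A * O)).sum(axis=1)

    T = (A * O)[:, None] * (B * D)[None, :] * f   # most likely macrostate
    print(T.round(1))                  # row sums ~ O, column sums ~ D

Every trip matrix satisfying the row and column totals is an allowable microstate; the entropy-maximizing T is simply the macrostate realizable in the greatest number of ways.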

And what about polyplexity? Well – complex time and complex space, described in some appropriate, orderly hierarchical sequence, may constitute a third kind of prior information, along with the historical and logical. Polyplexity would take the idea of prior structure in models one step further. This would not suddenly render predictable what is fundamentally unpredictable in complex social systems (the notion of real time alone settles this issue), but it may tighten the envelope of constraints within which the genuinely surprising can happen, while also helping to clarify the limits of modelling in the social and policy sciences.

4 Some Concluding Thoughts

An unspoken word behind much of the preceding discussion – a discussion at times quite dry and technical – is intentionality. Intentionality, along with the human purposes it drives, is why the notion of real time makes immediate intuitive sense; it is what guides the weaving of disparate locations and moments into places and events meaningful to people, and it is what distinguishes cognition and abstract thought from the mechanical sorts of awareness represented in, say, the hierarchical group-theoretic model discussed earlier. Intentionality and the closely associated notion of purposeful action set limits to what we can model in the social world since, qua telic concepts, they are not compatible with current causal scientific paradigms, including the paradigms of complexity. Indeed, social processes and events involve both “because” and “in order to”, and we as yet have no tools to deal with the latter. Purpose is a major factor in evolution, adaptation and learning in social systems, whereas in natural systems, which also evolve, adapt and learn, it clearly is not. The role of purpose in the social world is a defining qualitative difference between natural and social complex systems. The more advanced models of artificial cognitive agents are designed to mimic purposeful behaviour; however, to ask where these agents get their purposes from is to promptly end the conversation.

Considering how difficult it is to build reliable models of complex natural systems, what should models of complex social and policy-oriented systems be expected to do?

For years now several researchers have argued for a softer role for models in social science and policy, beyond the traditional triad of description–explanation–prediction. They talk about models as narratives about possible things to come, as plots around which stories of warning or encouragement may be woven. This is not just a nonchalant New Age stance but one informed by mounting evidence that the validation of complex system models is not really possible. I sympathize with this view but feel that it goes too far in abdicating all responsibility for trying to anticipate at least some aspects of the future. Polyplexity is meant as an effort to figure out what kinds of things may be known in advance, under what conditions, through what kinds of representational manipulations – and thus perhaps to help restore a modicum of respect for the predictive power of complex social system models.

There are obviously more questions than answers in what I presented here. Does the idea of polyplexity make sense in principle? If yes, could it help simplify the study of the many intractable problems that the social and policy sciences deal with? Could it handle phenomena of the information age that appear to unfold against a hybrid physical/virtual space–time? What may be the role of polyplexity in forecasting and scenario development, especially as used in the policy sciences? What may be, in particular, the contribution of polyplexity to robust adaptive planning as defined by Lempert et al. (2004)? Can we figure out how best to distribute complexity considerations among actor, context, and spatio-temporal background? What are the computational implications of this approach? How may familiar, successful models of complex social science systems be usefully recast in polyplexity terms? Because ideas evolve in real time, it is not possible to predict at this point to what extent these speculations about polyplexity may survive scrutiny. But writing this chapter was a complex spatio-temporal event closely linked to a number of other, similar events, all intersecting at the time and place of the meeting out of which this volume was eventually born. Taken together, these intertwined trajectories in time, space and ideas may express an emerging message on complexity and simplicity in social science that no one could have predicted.