Abstract
This chapter sets the basis for a probabilistic approach to spatial complexity, by focusing on: (a) species richness (the number of different qualitative classes/species/population types/categories/covers/colours/types occupying the spatial extent of a surface), (b) entropy (measured e.g. by Shannon's formula, and reflecting the degree of equidistribution of these classes on the basis of their relative participation percentages) and (c) randomness of the allocations of these classes. Evidently, the higher the number of different classes within a surface, space, or spatial object, the higher its spatial complexity; further, the more disordered their allocation, the higher the spatial complexity. These considerations, however, ramify into a series of questions, e.g.: when is a string of symbols random? What alternative definitions of random strings are there, and how do they bear on spatial randomness?
From Chaos became Erebus and the black Night;
From Night were born the Aether and the Day,
“ἐκ Χάεος δ᾽ Ἔρεβός τε μέλαινά τε Νὺξ ἐγένοντο:
Νυκτὸς δ᾽ αὖτ᾽ Αἰθήρ τε καὶ Ἡμέρη ἐξεγένοντο”
(Hesiod, c.750 b.C., “Theogony” 104,123-125)
Keywords
- Spatial complexity
- Complexity and randomness
- Entropy and complexity
- Landscape diversity
- Algorithmic complexity
- Map complexity
- Landscape complexity
1 Spatial Entropy Versus Complexity
“A bit of entropy is a bit of ignorance”
(Seth Lloyd 2007, p.80)
While analyzing spatial complexity, one should always look for the simplest methods possible, despite the fact that spatial settings are "interesting" precisely when they are not simple (although they may be pleasant nevertheless). Understanding the relationship between entropy and complexity is a multi-faceted issue cropping up in many domains of scientific enquiry, and it has attracted the attention of physicists and computer scientists time and again. Generally, the higher the number of different classes (species/populations/categories/covers/colours/types…) within a surface or spatial object, the higher the spatial complexity. But this also depends on the scale at which the object is examined, as different classes or objects can be packed together so densely that they may even admit an easy synoptic description (Fig. 4.1).
The number of classes present on a spatial surface is only one indication of its spatial complexity; the relative participation of each of these classes in the object examined is another. The more different classes participate in covering the surface, the higher the spatial complexity. It is common in the scientific literature of ecology, cartography and geography to define the entropy of a map by using the Shannon formula (Shannon and Weaver 1949; Forman and Godron 1986):

\[ H = - \sum\limits_{i = 1}^{V} Q_{i} \log_{2} Q_{i} \]

where V is the total number of "colors" present in the map, depending on the map examined (e.g. if it is geographic, then V represents land cover classes, land use types, landscape types, population types etc.), and \(Q_{i}\) is the percentage of occurrence of each color i in the map's area, with \(\sum\nolimits_{i = 1}^{V} {Q_{i} = 1}\).
If the entire map is covered by one color only, then H=0. With an increase in the number of colors V, entropy increases, but the entropy formula does not take into account the spatial allocation of these classes, as the maps (b) and (c) show (Fig. 4.2).
Thus, entropy is only a relative (although highly important) indicator of spatial complexity. Let us see two other examples (Fig. 4.3). The maps of the two upper rows have lower entropy than those of the two bottom rows: H = 1 versus H = 1.5 respectively. The two upper rows show maps with two colors only, while the two bottom rows show maps of equal size with three colors. The maps with three colours have a higher entropy than the maps with two colours, regardless of the spatial allocation of these colours. This also shows why entropy alone is not a sufficient criterion of spatial complexity, since different spatial configurations (and therefore different spatial complexities) can correspond to exactly the same entropy values (Papadimitriou 2012).
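The entropy values quoted above can be verified numerically. The following is a minimal sketch in Python, using hypothetical 2×2 example grids: it computes H from the colour shares of a rasterized map, and shows that two differently configured two-colour maps yield the same H = 1, while a three-colour map with shares 1/2, 1/4, 1/4 yields H = 1.5.

```python
import math
from collections import Counter

def map_entropy(grid):
    """Shannon entropy (in bits) of the colour shares of a rasterized map."""
    cells = [colour for row in grid for colour in row]
    total = len(cells)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(cells).values())

striped = [[0, 1], [0, 1]]   # two colours, equal shares
clumped = [[0, 0], [1, 1]]   # same shares, different spatial configuration
three   = [[0, 0], [1, 2]]   # three colours with shares 1/2, 1/4, 1/4

print(map_entropy(striped))  # 1.0
print(map_entropy(clumped))  # 1.0  (entropy ignores the allocation)
print(map_entropy(three))    # 1.5
```

Note that the two two-colour grids receive identical entropy despite their different configurations, which is precisely the limitation discussed above.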
In biogeography and ecology, the term "diversity" is widely used. In its general form, diversity refers to the identification of the characteristics of a map, as reflected by the number of different classes of elements in it (i.e. by the "diversity" of its elements, a term used in spatial ecology), and it constitutes a recurrent theme in ecological research (Clarke and Warwick 1998; Anand and Orloci 2000; Petrovskaya et al. 2006; McShea 1991; Magurran 2004). The connection between diversity and complexity was discussed in an ecological context by Zhang et al. (2009), who found that an increase in Shannon diversity appeared concurrently with increasing "landscape complexity". Yet, other studies based on field observations have shown that spatial complexity and entropy (diversity) are not always positively correlated. Species richness and complexity are not always correlated either (Azovsky 2009, p. 308).
Besides entropy, another index needs to be parenthetically mentioned, which is contagion. The "contagion" index was proposed by O'Neill et al. (1988), Turner (1989, 1990) and Turner and Ruscher (1988) to characterize spatial (landscape) patterns in landscape maps:

\[ C = 2n\ln n + \sum\limits_{i = 1}^{n} \sum\limits_{j = 1}^{n} Q_{ij} \ln Q_{ij} \]

where n is the total number of cover types in the geographical space (or landscape), \(Q_{ij}\) is the probability of cover type i being adjacent to cover type j, and \(2n\ln(n)\) is the maximum contagion, which is attained if there is an equal probability of any two landscape types being adjacent to one another.
The problem with this index is that it may yield the same values for entirely different spatial configurations (and hence, for configurations of entirely different topology and spatial complexity). As shown in six example 3×3 binary maps (Fig. 4.4), the 1st binary map has only one black cell, the 2nd has two, the 3rd has three and the 4th has four. Yet, all these maps have the same contagion index. The same applies to the binary maps 5 and 6: both have the same contagion, although their spatial configurations are completely different. But different spatial configurations most likely have different spatial complexities (this will be examined in detail in the next chapters). Consequently, contagion cannot be taken as a measure of spatial complexity.
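For illustration, contagion can be computed from adjacency frequencies. The sketch below (Python, with hypothetical example grids) implements the index from the quantities described above, counting horizontal and vertical cell adjacencies once each; this is only one of several adjacency conventions used in the landscape-ecology literature, so it is a simplified sketch rather than the canonical implementation.

```python
import math
from collections import Counter

def contagion(grid):
    """Contagion C = 2n*ln(n) + sum_ij Q_ij*ln(Q_ij), with Q_ij estimated
    from horizontal and vertical cell adjacencies (each counted once)."""
    rows, cols = len(grid), len(grid[0])
    pairs = Counter()
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                pairs[(grid[r][c], grid[r][c + 1])] += 1
            if r + 1 < rows:
                pairs[(grid[r][c], grid[r + 1][c])] += 1
    total = sum(pairs.values())
    n = len({colour for row in grid for colour in row})
    entropy_term = sum((k / total) * math.log(k / total)
                       for k in pairs.values())
    return 2 * n * math.log(n) + entropy_term

# A uniform single-cover map has zero contagion under this convention
print(contagion([[0, 0], [0, 0]]))   # 0.0
```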
However, it is not only maps that can demonstrate the central role of entropy. As Steinhaus (1954) suggested, there is an interesting association of the concept of entropy with the complexity of a curve: the average number of intersections of a plane curve with a random line crossing it is equal to 2L/C, where L is the curve's length and C is the length of the boundary of the curve's convex hull. Plugging this into Shannon's entropy formula and defining (arbitrarily) as the "temperature" of the curve the quantity

\[ T = \left[ \ln \frac{2L}{2L - C} \right]^{-1} \]

it is possible to derive a thermodynamic analogue of curve complexity. Supposedly, this link between geometry and thermodynamics gives a measure of the "entropy" of a curve (Mendes France 1983; Dupain et al. 1986):

\[ H = \ln \frac{2L}{C} \]
The entropy of a curve is also an indicator of its complexity, i.e. if the curve is described by a polynomial of degree d, then its entropy is at most 1+log2d (Stewart 1992).
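A minimal numerical sketch, assuming the entropy form H = ln(2L/C) built from the quantities L and C above: a straight segment of length L has a degenerate convex hull whose boundary has length 2L, giving H = 0, while a circle's length equals its hull's boundary length, giving H = ln 2.

```python
import math

def curve_entropy(curve_length, hull_boundary_length):
    """Entropy H = ln(2L/C) of a plane curve, where L is the curve's
    length and C the length of the boundary of its convex hull."""
    return math.log(2 * curve_length / hull_boundary_length)

print(curve_entropy(1.0, 2.0))           # 0.0 (straight segment: C = 2L)
print(curve_entropy(math.pi, math.pi))   # ln 2 (circle: C = L)
```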
2 Spatial Randomness and Algorithmic Complexity
To a land of deep night, of disorder and utter darkness,
where even light is like darkness
“אֶ֤רֶץ עֵיפָ֨תָה ׀ כְּמֹ֥ו אֹ֗פֶל צַ֭לְמָוֶת וְלֹ֥א סְדָרִ֗ים וַתֹּ֥פַע כְּמֹו־אֹֽפֶל׃ פ”
(The Bible, Job, 10.22)
The study of the spatial randomness of distributions of some population in any spatial dimension (Fig. 4.5) is a vibrant field of research, particularly in the context of spatial random processes, for which a basic introduction can be found in Adler (1981) and a more elaborate and updated treatment in Hristopoulos (2020). Expectedly, spatial complexity depends on spatial randomness, but, as it turns out, it is difficult to assess spatial randomness in terms of algorithmic complexity, because there are several alternative approaches to deciding whether a string of symbols is random or not (with diverging approaches even for the one-dimensional case).
In an early approach, von Mises (1919) defined an infinite binary string as random so long as it has as many 0s as 1s in the "limit". Adopting a different approach, Church (1940) defined as random every infinite string whose digits cannot be given by a recursive function. Later, Martin-Löf (1966a, b) suggested that random infinite strings are those that satisfy all statistical tests for randomness. Levin (1973, 1974) and Chaitin (1974, 1975) defined random strings x as those endowed with maximum Chaitin–Levin complexity, meaning that there exists a number c such that, for every n, this complexity is higher than the difference n − c. Following the most widely known definition, by Kolmogorov (1965), an infinite string x is random if its Kolmogorov complexity K(x) is maximum.
An alternative approach to randomness is Bennett's concept of the "logical depth" of an object (Bennett 1973, 1982, 1986, 1988a, b, c, 1990), which measures the time required by a minimal program to reproduce the object's organisation. The "logical depth" of a string is thus the calculation time needed (by a universal machine) to produce it from its minimum Kolmogorov description. Bennett's definition has had some applications in physics and biology (Bennett 1986, 1988b), and one of the implications of Bennett's theory is that the probability of encountering by chance an object with large logical depth is very small. Plausibly, the possibility that Bennett's definition might be used to assess spatial complexity should not be precluded. However, the problem with the definition of randomness becomes explicit even by considering simple cases. Consider, for instance, a binary string composed of 0s and 1s, and let N be the sum of the string's first n elements. If the string is random in the Church sense, then:

\[ \lim_{n \to \infty} \frac{N}{n} = \frac{1}{2} \]
But there are strings satisfying this equation at the limit which are clearly non-random (e.g. the string 0101010101010101010101…). This simple example illustrates why the riddle of defining string randomness remains unresolved even for one-dimensional objects. Further, many strings that happen to satisfy the criteria of von Mises and Church fail to satisfy Martin-Löf's criteria. But characterizing a string as random in the sense of Kolmogorov means that it accepts no shorter algorithmic description, and therefore it has no regularity at all and passes the statistical tests required by Martin-Löf's definition.
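The counterexample can be made concrete. In the Python sketch below, the periodic string 0101… satisfies the limiting-frequency criterion exactly, yet it is produced by a description only a tiny fraction of its own length, so it is far from random in the Kolmogorov sense:

```python
periodic = "01" * 5000   # the string 0101... of length 10,000

# It passes the von Mises / Church frequency criterion exactly:
print(periodic.count("1") / len(periodic))   # 0.5

# ...yet a "program" far shorter than the string suffices to generate it:
program = '"01" * 5000'
print(eval(program) == periodic)             # True
print(len(program), len(periodic))           # 11 10000
```

The length of the shortest such generating description is exactly what Kolmogorov complexity formalizes; a truly random string would admit no description appreciably shorter than itself.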
Yet, opinions diverging as much as they do for strings can be seen in the case of 2d surfaces also. For instance, complexity is often perceived to be a condition "in between order and chaos", in such a way that (e.g. in Fig. 4.6) neither map a nor map c is "complex", because a is ordered and c is random. But map b is perceived as "complex", because it lies between order and randomness and displays distinct patterns (such as clumps of the same colour).
Spatial stochasticity can be created by a 2d Brownian stochastic motion. Here, for the purpose of illustration, it is plotted for 400,000 time steps (Fig. 4.7). As the east-west Brownian motion is statistically identical to the north-south motion, the joint probability of being at a position u along the horizontal axis and v along the vertical axis is the product of the Gaussian probability densities of the respective motions:

\[ p(u,v) = \frac{1}{\sqrt{2\pi t}}\, e^{-u^{2}/2t} \cdot \frac{1}{\sqrt{2\pi t}}\, e^{-v^{2}/2t} = \frac{1}{2\pi t}\, e^{-(u^{2}+v^{2})/2t} \]
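Such a trajectory is straightforward to simulate. The sketch below (Python, with a reduced step count for brevity rather than the figure's 400,000) accumulates independent unit-variance Gaussian steps in the two directions, and evaluates the joint endpoint density as the product of the two marginal Gaussian densities:

```python
import math
import random

random.seed(42)

# Simulate a 2-D Brownian path as cumulative independent Gaussian steps
steps = 10_000
x = y = 0.0
path = []
for _ in range(steps):
    x += random.gauss(0.0, 1.0)   # east-west increment
    y += random.gauss(0.0, 1.0)   # north-south increment
    path.append((x, y))

def joint_density(u, v, t):
    """Density of the walker being at (u, v) after time t: the product
    of the two marginal Gaussian densities N(0, t)."""
    return math.exp(-(u * u + v * v) / (2.0 * t)) / (2.0 * math.pi * t)

print(joint_density(0.0, 0.0, 1.0))   # 1/(2*pi), about 0.1592
```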
When a rugged surface (noisy, with "ups" and "downs") is examined, its spatial complexity depends even more on the level of spatial resolution at which it is examined. At this point, issues of choice and technical capabilities enter: one simply has to experiment with surfaces that can be constructed on the basis of Gaussian-type functions describing a terrain (Fig. 4.8), following the general type (where \(a_i, b_i, c_i\) are constants):

\[ z(x,y) = \sum\limits_{i} a_{i}\, e^{-\left[(x - b_{i})^{2} + (y - c_{i})^{2}\right]} \]
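A surface of this kind is easy to generate. In the Python sketch below the constants a_i, b_i, c_i are hypothetical (drawn at random for illustration): each term contributes a Gaussian "hill" of height a_i centred at (b_i, c_i), and the resulting terrain is sampled on a raster grid.

```python
import math
import random

random.seed(1)

# Hypothetical constants: a_i = peak height, (b_i, c_i) = hill centre
hills = [(random.uniform(0.5, 2.0),
          random.uniform(0.0, 10.0),
          random.uniform(0.0, 10.0)) for _ in range(5)]

def elevation(x, y):
    """z(x, y) as a sum of Gaussian-type terms, one per hill."""
    return sum(a * math.exp(-((x - b) ** 2 + (y - c) ** 2))
               for a, b, c in hills)

# Rasterize the terrain on an 11 x 11 grid
terrain = [[elevation(x, y) for x in range(11)] for y in range(11)]
```

Refining the grid spacing corresponds to examining the same surface at a finer spatial resolution, which is exactly the dependence discussed above.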
In a rasterized map, a string of symbols can represent a strip of squares (or parallelograms) of a spatial object. The algorithmic complexity of a string of symbols is equal to the length of the minimum description of this string. In one dimension, if a string of symbols is random, then it has maximum algorithmic complexity; equivalently, if the allocation of colors/covers over a map is random (Fig. 4.9), then the map has maximum complexity (with respect to any other map of the same size and with the same number of covers/colors). If it has repeating patterns, then its description can be reduced to a simpler one by taking advantage of these repetitions, and in that case it has a lower complexity than an incompressible string of symbols.
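This compressibility argument can be illustrated directly. In the Python sketch below, a striped map and a randomly allocated map of the same size and colour shares are flattened row-by-row into strings; the patterned one admits a much shorter compressed description. Compressed length (here via zlib) is only an upper-bound proxy for algorithmic complexity, not the Kolmogorov minimum itself, but it conveys the idea:

```python
import random
import zlib

random.seed(7)
size = 64

# A striped map (repeating pattern) and a random map, flattened row-by-row
striped = "".join("01"[(row // 2) % 2]
                  for row in range(size) for _ in range(size))
scattered = "".join(random.choice("01") for _ in range(size * size))

striped_desc = len(zlib.compress(striped.encode()))
scattered_desc = len(zlib.compress(scattered.encode()))
print(striped_desc < scattered_desc)   # True: the patterned map compresses far better
```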
References
Adler, R. J. (1981). The Geometry of Random Fields. New York: Wiley.
Anand, M., & Orloci, L. (2000). On hierarchical partitioning of an ecological complexity function. Ecological Modelling, 132, 51–62.
Azovsky, A. (2009). Structural complexity of species assemblages and spatial scale of community organization: A case study of marine benthos. Ecological Complexity, 6, 308–315.
Bennett, C.H. (1973). Logical Reversibility of Computation. IBM Journal of Research and Development, 525–532.
Bennett, C. H. (1982). The thermodynamics of computation—A review. International Journal of Theoretical Physics, 21(12), 905–940.
Bennett, C. H. (1986). On the nature and origin of complexity in discrete, homogeneous locally-interacting systems. Foundations of Physics, 16(6), 585–592.
Bennett, C. H. (1988a). Demons (pp. 91–97). Janvier: Machines et Thermodynamique. Pour la Science.
Bennett, C. H. (1988b). Logical Depth and Physical Complexity. In R. Herken (Ed.), The Universal Turing Machine: A half-century survey (pp. 227–257). Oxford: Oxford University Press.
Bennett, C. H. (1988c). Notes on the History of Reversible Computation. IBM Journal of Research and Development, 32(1), 16–23.
Bennett, C.H. (1990). How to define Complexity in Physics, and Why. In W.H. Zurek, (Ed.) Complexity, Entropy and the Physics of Information (pp. 137–148). Santa Fe Studies in the Sciences of Complexity, vol. VIII. New York:Addison-Wesley.
Chaitin, G.J. (1974). Information-theoretic limitations of formal systems. J.A.C.M., 403–424.
Chaitin, G.J. (1975). A theory of program size formally identical to information theory. J.A.C.M. 22, 329–340.
Church, A. (1940). On the concept of a random sequence. Bulletin of the American Mathematical Society, 46, 130–135.
Clarke, K. R., & Warwick, R. M. (1998). A taxonomic distinctness index and its statistical properties. Journal of Applied Ecology, 35, 523–531.
Dupain, Y., Kamae, T., & Mendes France, M. (1986). Can one measure the temperature of a curve? Archive for Rational Mechanics and Analysis, 94, 155–163.
Forman, R. T. T., & Godron, M. (1986). Landscape Ecology. New York: Wiley.
Hristopoulos, D. (2020). Random Fields for Spatial Data Modeling. Cham: Springer.
Kolmogorov, A. (1965). Three approaches to the quantitative definition of information. Problems of Information Transmission, 1, 1–17.
Levin, L. A. (1973). On the notion of random sequence. Soviet Mathematical Doklady, 14, 1413.
Levin, L. A. (1974). Laws of information conservation and aspects of the foundation of probability theory. Problems of Information Transmission, 10(3), 206–210.
Lloyd, S. (2007). Programming the Universe. London: Vintage.
Magurran, A. E. (2004). Measuring Biological Diversity. Oxford: Blackwell.
Martin-Löf, P. (1966a). On the concept of a random sequence. Theory of Probability and Its Applications, 11, 177–179.
Martin-Löf, P. (1966b). The definition of random sequences. Information and Control, 9, 602–619.
McShea, D. (1991). Complexity and evolution: what everybody knows. Biology and Philosophy, 6(3), 303–324.
Mendes France, M. (1983). Les courbes chaotiques. Courrier du Centre National de la Recherche Scientifique, 51, 5–9.
O’Neill, R. V., et al. (1988). Indices of landscape pattern. Landscape Ecology, 1(3), 153–162.
Papadimitriou, F. (2012). The Algorithmic Complexity of Landscapes. Landscape Research, 37(5), 599–611.
Petrovskaya, N. S., Petrovskii, S. V., & Li, B. L. (2006). Biodiversity measures revisited. Ecological Complexity, 3, 13–22.
Shannon, C. E., & Weaver, W. (1949). The Mathematical Theory of Communication. Urbana: University of Illinois Press.
Steinhaus, H. (1954). Length, shape and area. Colloquium Mathematicum, 3, 1–13.
Stewart, I. (1992). Another Fine Math You've Got Me Into. New York: Dover.
Turner, M. G. (1989). Effects of changing spatial scale on the analysis of spatial pattern. Landscape Ecology, 3(3/4), 153–162.
Turner, M. G. (1990). Spatial and temporal analysis of landscape patterns. Landscape Ecology, 4(1), 21–30.
Turner, M. G., & Ruscher, C. L. (1988). Changes in landscape patterns in Georgia, U.S.A. Landscape Ecology, 1(4), 241–255.
Von Mises, R. (1919). Grundlagen der Wahrscheinlichkeitsrechnung. Mathematische Zeitschrift, 5, 100.
Zhang, F., Tashpolat, T., Ding, J.-L., Wang, B.-C., Wang, F., & Mamat, S. (2009). The change of land use/cover and characteristics of landscape pattern in arid areas oasis: A case study of Jinghe County, Xinjiang Province. Shengtai Xuebao/Acta Ecologica Sinica, 29(3), 1251–2126.
© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG
Papadimitriou, F. (2020). The Probabilistic Basis of Spatial Complexity. In: Spatial Complexity. Springer, Cham. https://doi.org/10.1007/978-3-030-59671-2_4