1 Introduction

We normally think of engineering, and particularly megaengineering, in terms of big iron: large-scale physical investments in the form of ports, bridges, highways, and dams. Science has its own versions, such as the Hubble Telescope, the CERN Large Hadron Collider, and the South Pole Research Station, each designed in its way to support a number of researchers with a facility that can be shared between them and engineered to high standards of reliability and robustness. Over the years, however, new materials of greater strength, along with various forms of miniaturization, have allowed a steady progress towards engineering solutions that are smaller and in many cases cheaper—towards smaller iron, as it were. New materials led to the vastly increased power of the modern jet engine, and to the light, fuel-saving design of the Boeing 787 Dreamliner. Most spectacularly, perhaps, the individual vacuum tubes and components of early computer circuits have been replaced with chips that integrate millions and even billions of components into a single unit less than a centimeter across that can be mass produced at very low cost. As a result, it has been possible to replace the big iron of the university central computer of the 1970s with a multitude of small machines distributed in the institution’s offices and laboratories.

The in-vehicle navigation system, sometimes called a satnav, provides a compelling case in point. Today one can purchase and install for roughly $100 a unit that will successfully track the vehicle, match the track to a digital representation of a road network, identify an address or point of interest as the trip’s destination, and provide detailed driving instructions. All of these services are provided by a package that is small enough to fit, unobtrusively, on the dashboard of a modern automobile.
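The map-matching step mentioned above, snapping a noisy GPS fix to the nearest segment of the digital road network, can be illustrated with a minimal sketch. Planar coordinates and a simple nearest-segment rule are assumed here for clarity; the street names and coordinates are invented, and operational systems also project latitude/longitude and exploit network topology and the vehicle's trajectory.

```python
# Minimal map-matching sketch: snap a GPS fix to the nearest road segment.
# Planar coordinates are assumed for simplicity.

def point_segment_distance(p, a, b):
    """Distance from point p to segment a-b; all arguments are (x, y) tuples."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:                # degenerate segment
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    # Parameter of the projection of p onto the line through a and b,
    # clamped to [0, 1] so the closest point stays on the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy      # closest point on the segment
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def match_to_road(fix, segments):
    """Return the road segment nearest to the GPS fix."""
    return min(segments, key=lambda s: point_segment_distance(fix, s[1], s[2]))

# Hypothetical two-street network: (name, endpoint, endpoint).
roads = [("Main St", (0, 0), (10, 0)),
         ("Oak Ave", (5, 5), (5, 15))]
print(match_to_road((4.0, 0.8), roads)[0])   # the fix lies closest to Main St
```

Even this toy version captures the essential idea: the device trusts the database over the raw measurement, displaying the vehicle on the road rather than at the noisy GPS position.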

Paradoxically, however, the name commonly assigned to such devices is “GPS”, after the satellite-based Global Positioning System developed and managed by the US Department of Defense that provides the essential measurements of current vehicle position. The equally, and arguably more, important database representing the locations of streets, which is also contained in the system, is invisible and intangible to the average user, who is therefore not inclined to name the device after it, much to the frustration of the vendors of such databases (such as Navteq or TeleAtlas), whose brands consequently mean essentially nothing to the average citizen. In short, we tend to think of services such as this in terms of their physical, tangible expression: not big iron, and not even iron, but nevertheless constructed of real, tangible materials. The bits and bytes of the database have no physical presence and thus little meaning to the user.

Even when the hardware and network connections are of significant physical size, their importance is still often unrecognized. Thus it is the GPS circuitry that dominates the public perception of a satnav, not the chips that perform the map matching and generate the visual displays. One of the largest buildings in the Olympic complex constructed for the 2008 Beijing games was an almost featureless cube with no obviously visible function. It housed the very elaborate and extensive computers, routers, and networks that were needed to manage the enormous flows of digital information from the site, and in the sense of this discussion was as much megaengineering as the instantly recognizable “bird’s nest” stadium.

In this chapter I seek to redress this imbalance by arguing that in today’s information economy the bits and bytes of digital systems are at least as important as society’s bridges and highways. More specifically, I argue that geographic information systems (GIS), and more generally the geospatial technologies, are just as important in their impacts on society as the traditional megaprojects, and that their long-term effects will be just as profound.

The next section discusses the nature of large scale investment in digital technology, or what is often termed cyberinfrastructure. This is followed by a discussion of geospatial technologies: their history, their role in modern society, and their likely development directions. The final substantive section discusses the impacts of these technologies on society, and the growing interest of society in participating more directly in their application.

2 Cyberinfrastructure

Parallels have often been drawn between today’s electronic communication networks, and specifically the Internet, and such major investments of previous generations as canals, railroads, and telephone networks. Describing the Internet as the information superhighway, a term often attributed to Al Gore, makes the point perfectly. It invites us to compare the impacts of the Internet with those of the construction of freeways (in the US, the Eisenhower National System of Interstate and Defense Highways), and implies that the massive changes of land use that resulted, with the development of new malls, hotels, and housing developments at freeway interchanges, the collapse of many traditional downtowns, and restructured commuting, are likely to be matched or exceeded by the eventual impacts of the Internet.

In the US the term cyberinfrastructure has been widely adopted, largely at the instigation of the National Science Foundation (NSF), to describe the role of digital technology in revolutionizing the way research is conducted. While NSF has often taken a leading role in the building of the US Internet, part of the latter’s power stems from its ability to integrate numerous subnetworks that have been constructed by other public agencies and by private investment. But the much-cited Atkins Report (NSF, 2003) defines cyberinfrastructure as reaching far beyond the communication network itself, as a “layer of enabling hardware, algorithms, software, communications, institutions, and personnel” that lies between a layer of “base technologies…the integrated electro-optical components of computation, storage, and communication” and a layer of “software programs, services, instruments, data, information, knowledge, and social practices applicable to specific projects, disciplines, and communities of practice.” The report sees this investment in infrastructure as nothing short of revolutionary in its impact on the way science is conducted, and on the potential for new discoveries and inventions; and vastly outweighing the impact of any single, more traditional big-iron investment. It describes the new science that is enabled by cyberinfrastructure as more collaborative, no longer requiring collaborators to be co-located; as more integrated given the ease with which researchers from different disciplines are able to collaborate; and as more computational, relying on simulation rather than analysis to study the complex systems and problems that increasingly require science’s attention.

While the term cyberinfrastructure has its strongest currency in US science, the same basic idea of information technology as megaengineering, with megaimpacts, has now invaded virtually all aspects of human activity in the developed countries. An increasing proportion of retailing takes place electronically, as does more and more of our communication, whether in the form of speech or email. More and more people obtain their news online, to the extent that many traditional print media, notably newspapers, are in danger of collapse. Online entertainment, in the form of participatory gaming, is now occupying a significant proportion of society’s leisure time.

Despite this, the digital divide is alive and well, and it would be foolish to suggest that the impacts and benefits of cyberinfrastructure will ever be uniformly distributed around the world and throughout human society. The overwhelming majority of the human population, notably in the developing countries, currently has no access to computers or their communication networks. Great progress is being made, but in the constantly accelerating world of electronic technology it seems virtually impossible for the disadvantaged ever to catch up with the advantaged.

3 The Geospatial Technologies

3.1 Overview

Geospatial information can be defined as information about specified places on or near the Earth’s surface, and thus in the environments within which humans live and act. It can consist of statements about large areas, such as the population of California, or about narrowly defined points, such as the height of Everest, but in every case there is a link between some property and an associated place. To be operational, the associated area must be defined in latitude and longitude, or in some system that can be readily converted to latitude and longitude. Today the set of such systems includes street addresses, since it has become easy to convert them to latitude and longitude in most developed countries, a process known as geocoding or address matching. Indexes or gazetteers of recognized features such as states or lakes also exist, allowing properties associated with such features to be positioned in latitude and longitude; and in many countries there are recognized systems of formal coordinates such as national grids. One of the great successes of geospatial technology in recent years has been in making it almost trivially easy, cheap, and reliable to convert between these alternative systems of geographic referencing, and to embed these features in countless Web services. The general public uses these services, often without being aware of their inherent sophistication, in such daily activities as finding the locations of points of interest such as stores or hotels, acquiring driving directions, or planning travel.
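The conversion between referencing systems described above can be sketched for the classic case of address matching, in which a house number is interpolated along a street segment that carries an address range and two known endpoints. The street name, coordinates, and address range below are invented for illustration; real geocoders must also parse the address string, handle odd/even sides of the street, and fall back to gazetteers when parsing fails.

```python
# Sketch of address matching (geocoding) by street-range interpolation:
# a house number is placed proportionally along a street segment whose
# endpoints and address range are known. All segment data are invented.

def geocode(house_number, segment):
    """Interpolate a house number along a street segment.

    segment: (low_addr, high_addr, (lat0, lon0), (lat1, lon1))
    Returns an interpolated (lat, lon).
    """
    low, high, (lat0, lon0), (lat1, lon1) = segment
    t = (house_number - low) / (high - low)   # fraction along the segment
    return (lat0 + t * (lat1 - lat0), lon0 + t * (lon1 - lon0))

# Hypothetical block of "State St": numbers 100-198 between two corners.
state_st = (100, 198, (34.4190, -119.6980), (34.4200, -119.6970))
print(geocode(149, state_st))   # roughly the middle of the block
```

The sketch shows why geocoding is only as good as the underlying street database: the result is an interpolation between surveyed corners, not a measurement of the building itself.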

Over the past few decades there has been rapid development in a number of technologies that create, process, or analyze geospatial information. GPS has already been mentioned, as a system for the rapid and accurate measurement of location. Various versions exist, some capable of determining location to millimeter accuracies. Another technology is satellite-based remote sensing, which dates in its civilian form from the early 1970s. Today a large array of Earth-imaging satellites is in regular orbit, owned and operated by many countries and corporations, collecting and transmitting images at ground resolutions as fine as 62 cm. In the aftermath of the Wenchuan Earthquake of May 2008, for example, a large collection of fine-resolution images became almost immediately available to the Chinese authorities, including imagery acquired for very different purposes by US intelligence agencies.

The last and perhaps most important of these technologies is the geographic information system (GIS), a software package capable of performing a wide range of manipulations on geospatial information, including analysis, modeling, storage, visualization, and many other operations. Such packages are available in many different forms, designed for desktop computers, large scale servers, and hand-held devices, and supplied by commercial vendors, academic groups, and open-source communities. Today it is reasonable to assume that a GIS will be capable of performing virtually any conceivable operation on geospatial information.

The first GIS was developed in the 1960s (Foresman, 1998) to respond to a very specific requirement of the Canadian government: the calculation of measures of area from tens of thousands of hand-drafted maps based on field surveys. The federal government had established a committee to provide the provinces with detailed analyses of the Canadian land resource, including its current and potential uses. This would have been an enormously tedious, inaccurate, and labor-intensive task if performed by hand, but even in the primitive computing environment of the time it was possible to demonstrate that a computational solution was far preferable to a manual one in both costs and benefits. The maps in this case represented various forms of land use. But it was not long before other applications developed, in such areas as transportation and the gathering of the Census, and by the late 1970s a consensus had emerged that a wide range of applications could be served by a single, integrated software environment and a single approach to representing geographically distributed phenomena in digital form. The first commercial GISs appeared at that time and by the mid 1980s a substantial software industry had been established.
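The central computation of that first system, measuring the area enclosed by a digitized map boundary, reduces to the standard shoelace formula once the boundary vertices are in planar (projected) coordinates. The following is a minimal sketch of that idea with invented coordinates, not a reconstruction of the Canadian system's actual code.

```python
# Area of a digitized polygon by the shoelace formula, the standard
# computational answer to the measurement problem that motivated the
# first GIS. Planar (projected) coordinates are assumed.

def polygon_area(vertices):
    """Unsigned shoelace area of a simple polygon given as (x, y) tuples."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]   # wrap around to close the ring
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

# A hypothetical 3 km by 2 km rectangular land parcel (coordinates in km).
parcel = [(0, 0), (3, 0), (3, 2), (0, 2)]
print(polygon_area(parcel))   # 6.0 square km
```

The contrast with manual methods is stark: measuring the same area by hand meant overlaying transparent grids or running a planimeter around the boundary, once per map, for tens of thousands of maps.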

Geospatial technologies have found viable applications in virtually all areas of human activity. In research, they are now essential to any discipline that deals with phenomena on or near the surface of the Earth, from atmospheric science to criminology. A recent editorial (Nature, 2008) argued that there is no longer any excuse for not recording the exact location of any measurement or specimen collected from the environment, though vast numbers of specimens in our museums have only crudely recorded locational information. Geospatial technologies are used to track migrating birds and animals, to model and predict the effects of global climate change, and to study the emergence of residential segregation in cities (Goodchild & Janelle, 2004).

In the commercial world, geospatial technologies are essential for the routing and scheduling of delivery and collection vehicles, for keeping track of the distributed assets of utilities, for improving agricultural production through precision agriculture, and for managing cutting and silviculture in forestry operations. In government, they are essential in support of planning, data-gathering, and assessment.

However, the most spectacular recent growth has come in the use of geospatial technologies by the general public. One of the first such services was MapQuest, a site that could generate driving directions to specified destinations. After the release of Google Earth in 2005, and later Google Maps and Microsoft Virtual Earth, it became possible for any user, even a child of ten, to interact with detailed geospatial data and tools. This democratization of GIS (Butler, 2006), or at least of some of its basic functions, and the exposure of the general public to the wealth of geospatial data available from remote sensing and GPS, led to a dramatic increase in awareness and engagement. Google has recorded over 300 million downloads of the Google Earth client. More significantly, the release of the Application Programming Interface for both Google Earth and Google Maps in 2005 led to an explosion in the range of applications, as it became possible for people with minimal computing skills to create their own mashups of new data with the imagery and maps of Google Earth and Google Maps, and to publish the results online. Today Google Maps is used as the underlying mapping engine by an enormous variety of services, from hotel reservations to retailing, and the Google Earth mashup has become popular as a way of disseminating the results of scientific research. While GIS has always had a reputation for being difficult to use, and previous efforts at GIS education focused on the training of an elite cadre of professionals, today virtually anyone with access to the Internet can perform sophisticated manipulations of geographic information. The central educational question has shifted from “What does a GIS professional need to know?” to “What does everyone need to know to use these technologies effectively, ethically, and responsibly?”

3.2 Development Directions

Past evidence suggests that researchers can be spectacularly unsuccessful at anticipating major developments in the geospatial technologies. In their introduction to the second edition of Geographical Information Systems (Longley, Goodchild, Maguire, & Rhind, 1999), the editors commented that the most glaring omission in their first edition, published in 1991, was any reference to the Web, which began its spectacular growth and impact in 1993 with the release of the first public browser, Mosaic. By the end of 1993 Xerox’s Palo Alto Research Center had published the first Web-based map services and the first Web-based services for finding and obtaining geospatial data online began to appear in 1994. Within a few years the Federal Geographic Data Committee and the Open Geospatial Consortium had begun the process of developing the standards and specifications that would support today’s complex of Web-based services, often known as the GeoWeb or Geospatial Web (Scharl & Tochtermann, 2007).

Nevertheless, it is interesting and useful to speculate on what may emerge over the next few years. What follows is of course a highly personal and idiosyncratic analysis and I fully expect a range of different views from colleagues in the research community.

First, GPS is increasingly embedded in a wide range of technologies, from mobile phones to vehicles, enabling them to know their locations to meters. Computers are increasingly location-enabled through online services that convert Internet addresses to latitude and longitude, and the latest Microsoft operating systems do this automatically, so that computers finally know not only what time it is, but where on the planet they are currently located (or more precisely, currently connected to the Internet). RFID (Radio-Frequency Identification) also provides the basis for determining location, through the use of small sensors that respond to readers, just as aircraft constantly identify themselves to air-traffic controllers. RFID is the basis for tracking goods from production to sale, for tracking cars through automatic toll gates, and for the congestion charges now being levied in some cities. Surveillance cameras that can identify faces now offer the potential of tracking individuals as they move around densely monitored areas such as Central London.

All of these developments suggest that in future it will be possible to know where everything is at all times. The implications for personal privacy are profound, of course, but so are the benefits of being able to track parolees, pets, stolen cars, and the victims of a major catastrophe. Clearly one would not want to place an RFID tag on every brick in a building, but one might well want to do so with every farm animal in a tightly managed agricultural environment such as the Netherlands, or with every passport issued to a country’s citizens.

Second, the geospatial technologies are and have always been primarily two-dimensional in their representation of the geographic world. Remote sensing provides two-dimensional images, and while three-dimensional representations can be constructed from pairs of images through photogrammetry, they are limited to the outer surfaces of structures and cannot deal with overhangs, creating models that are often loosely described as “2.5D.” GPS is able to determine elevation as well as horizontal location, but less accurately, and cannot do so in places where satellite signals are blocked, such as inside structures. And although progress has been made in recent years, GIS is also dominated by two-dimensional representation, reflecting its historic roots in capturing the contents of paper maps (Goodchild, 1988).

In future we should imagine a world in which the geospatial technologies will become fully enabled in the third spatial dimension, and in which systems for navigating indoors will be as common and widely used as the current systems for navigating the two-dimensional outdoors. Retailing and the service economy will provide one strong motivating application by supporting the finding of destinations within the complex three-dimensional structures that increasingly typify urban shopping. Wayfinding within airports, mass transit systems, and universities is another obvious application, as is the tracking of staff, patients, and other assets within hospitals.

Third, geospatial technologies are already enabling the average person to become not only a consumer but also a producer of geospatial data. The phenomenon known as volunteered geographic information (VGI) (Goodchild, 2008a), a form of user-generated Web content, now extends to a wide range of geographic information types, from street maps to environmental quality, and to a wide range of scales from the global to the neighborhood. Thousands of individuals around the world are actively involved in the creation of VGI in their spare time, with no training in geography or cartography, with no obvious source of reward, and with no guarantee that what they produce is accurate. The question of quality is clearly key, since we traditionally place great trust in the official, authoritative sources of geographic data. However, there is ample evidence that volunteered information, while missing the kinds of quality guarantees provided by official agencies, is in practice of equal or higher quality in many instances (Goodchild, 2008b).

VGI is particularly helpful when it can take advantage of the presence of humans as observers and interpreters of local conditions, and for properties that fine-resolution remote sensing is unable to detect. Early detection of change and early evaluation of damage from disasters are two areas where citizens, with their dense geographic distribution, are able to provide information that officialdom would find impossibly expensive or time-consuming to collect. While the average citizen may be of little help in classifying and mapping local soils, he or she is an expert in naming local features, can measure simple parameters of the environment, and, with a little training, can even count local populations of birds or plants.

Finally, geospatial technologies are increasingly able to detect and map phenomena in real time. Traditional mapping has been a slow process and maps may in some cases be years out of date before they are published, distributed, and used. But sensors are now available to monitor and sample properties of the environment at frequent intervals, and Web-based technologies allow such data to be assembled and disseminated almost instantaneously. In future, then, it is conceivable that we will know the complete state of the world at all times. Loop detectors, cameras, GPS, vehicle probes, and RFID can potentially tell us the real-time state of a transportation system, allowing citizens to know the level of congestion and associated pollution at every point in an urban road network, or the precise arrival time of any transit vehicle. Trucks arriving at ports to collect containers could be precisely scheduled, avoiding the complex process of restacking containers to find and load the correct one, and reducing the pollution created by idling trucks. The detailed state of the environment and the state of human health are other arenas where access to real-time geospatial data, and associated monitoring, could provide real benefits to society.
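The real-time traffic scenario sketched above can be made concrete with a minimal example: average the latest probe-vehicle speeds reported on each road link and compare them with the link's free-flow speed. The link names, speeds, and the one-half threshold are illustrative assumptions, not any operational standard.

```python
# Minimal sketch of real-time traffic-state estimation from probe vehicles.
# A link is flagged as congested when the mean of its recent probe speeds
# falls below half the free-flow speed. All values are invented.

FREE_FLOW = {"I-5 N": 105.0, "Main St": 50.0}      # free-flow speeds, km/h

def congestion_level(link, probe_speeds_kmh):
    """Classify a link from a list of recent probe-vehicle speeds."""
    mean_speed = sum(probe_speeds_kmh) / len(probe_speeds_kmh)
    return "congested" if mean_speed < 0.5 * FREE_FLOW[link] else "flowing"

print(congestion_level("I-5 N", [38.0, 45.0, 41.0]))   # congested
print(congestion_level("Main St", [42.0, 47.0]))       # flowing
```

The hard problems in practice lie outside this sketch: assembling the probe reports within seconds, attaching each one to the correct link (the map-matching problem again), and estimating conditions on links with no recent probes.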

4 The Impacts of Geospatial Megaengineering

Metrics of the total commitment to geospatial technologies are hard to come by. In the early 1990s the U.S. Office of Management and Budget conducted a survey of annual investments in the acquisition of geospatial data, and showed a total of over $4 billion. But that figure excluded all of the remote-sensing programs of NASA, the GPS program of the Department of Defense, and many others, and was concerned only with data acquisition. We know that ESRI, the leading vendor of GIS software, has an annual turnover of roughly $1 billion. But there are no assessments of the amount of time citizens spend using geospatial technologies, or the amount of time invested in VGI.

Nevertheless, it seems clear that despite its diffuse nature and comparative invisibility, the sum total of activity centered on the geospatial technologies is a significant proportion of GNP in the developed countries and that it also occupies a significant proportion of volunteered time. More broadly, information technology now consumes a measurable and growing proportion of the US energy supply; represents an enormous public and private investment in communications infrastructure; and consumes a large and increasing share of household, corporate, and governmental budgets.

The geospatial technologies have some unique and specific impacts on human behavior, however. It is helpful at this point to distinguish between virtual and augmented realities. In a virtual reality (VR), computing technologies are used to replace the user’s real geographic environment with one created entirely from a database. The virtual environment may be immersive, excluding all signals from the real geographic environment. At the University of California, Santa Barbara, for example, an immersive environment consisting of a 30-ft (9.1 m) diameter sphere with projected 3D vision and sound, the Allosphere (http://www.mat.ucsb.edu/allosphere/), recently became available for interdisciplinary research. In an augmented reality (AR), on the other hand, information technology serves to augment rather than replace the signals coming from the environment. By definition, then, an AR requires the user’s actual location and the location represented in the database to be coincident; whereas in VR they must by definition be disjoint.

An AR environment may consist of a heads-up display in which information from the database is superimposed directly on the user’s field of view, or it may consist of no more than the screen of a mobile phone. In both cases the role of AR is to augment what the user can see, touch, hear, feel, and smell, by providing supplementary information through the visual or auditory channels. AR can also play a vital role in replacing a missing sense, as for example in applications that assist the visually impaired to navigate through complex environments without sight by providing audible directions (see, for example, Golledge, Loomis, Klatzky, Flury, & Yang, 1991). AR can inform a construction project of the positions of utilities under a street, or inform tourists of the locations, menus, and reviews of nearby restaurants. It can provide emergency personnel with vital information about the hazardous chemicals stored in a building, or about the real-time locations of other rescue workers in a smoke-filled structure. The applications of AR to human activities are limited only by our imagination.

Nevertheless it is the long-term impacts of AR that are likely to be the most profound. Consider, for example, a tourist in a strange city searching for a coffee shop. Traditionally such services have had to advertise themselves visually, through signage or the adoption of conspicuous locations. But AR-enabled customers can easily find wayfinding instructions to the nearest outlet using online databases. Thus conspicuous locations and intrusive signage are no longer needed, and services can retreat to cheaper, less obvious locations. In such a world services would no longer need to pay a premium for locations on street corners and main streets, leading to a substantial restructuring of the retail landscape.

Real-time knowledge of the state of transportation networks will allow drivers and passengers to respond quickly to congestion, construction, and other interruptions. An interesting pattern may emerge in such situations, as individuals decide whether to reroute, or to hold course on the grounds that conditions will improve as others leave the route. In principle the result of such behavior should be instability, because of the speed with which information passes around the system; perhaps information technology has played a similar role in the instabilities of the world economy that became almost uncontrollable in late 2008.

More broadly, geospatial technologies have greatly increased the ability of individuals to see what is happening in their own neighborhoods and around the world. Google’s decision to provide frequently updated, fine-resolution imagery of the Darfur region undoubtedly led to a greater sense of awareness of the atrocities being committed there. At the other end of the spectrum many local communities are employing geospatial technologies to help them understand and manage their own neighborhoods, raise awareness of potential problems, and engage with planning authorities.

5 Conclusions

I have argued in this essay that the geospatial technologies deserve the status of megaengineering. While they are highly dispersed, often miniaturized to the point of being virtually invisible, and produced by a complex array of companies, individuals, and agencies, many of them acting essentially independently, the sum total of this investment nevertheless combines to produce a substantial set of impacts on human activity.

The geospatial technologies largely evolved in a world of two spatial dimensions and with a focus on those aspects of the geographic landscape that are essentially static—the aspects such as topography, soils, and land cover that are the focus of traditional mapping. Recently, however, there have been major advances in our ability to characterize and monitor the world in real time, through the use of networks of sensors and through the willingness of individuals to volunteer information through the Web. The third spatial dimension is also becoming more important in a range of applications and in future it seems likely that the geospatial technologies will operate in the full four dimensions (three spatial dimensions plus time) of the geographic environment.

In the human body the various parts develop and function largely independently. The functions of the liver, for example, are very different from those of the foot or the head. The circulatory system reaches all parts of the body, making it difficult to target specific sites such as tumors with drugs introduced into the bloodstream. Only the nervous system is spatial, telling the brain exactly where pain is felt. By analogy, the geospatial technologies acquire, integrate, process, and distribute information that addresses not only what but where, and have consequently been argued to form a nervous system for the planet. What is missing at the global scale, of course, is the equivalent of the brain that integrates incoming signals, stores and processes them, and executes its decisions by passing signals back through the nervous system to control muscular action. Great progress has been made in the past few decades in integrating geospatial data, but we have not yet begun to build the kinds of integrated decision-making systems that can guide the planet into an increasingly uncertain future.