1 Introduction

A map is not a miniaturized reproduction of the Earth's surface; it is an abstraction of the Earth, a model that strongly depends on the cartographer's interpretation of the Earth. A two-dimensional map is a static, geometrically accurate (at a chosen level of accuracy) representation of three-dimensional real space. The process of map creation in traditional cartography has limitations: some of them have been overcome by new technologies and by the transition from paper to digital representations of the Earth; others remain unavoidable.

For example, one limitation is the static nature of paper maps: a specific scale defines the map accuracy and the amount of information that can be shown; to obtain more detailed and accurate information, another map at a larger scale is required. On digital maps, by contrast, the information is portrayed as a function of the scale: the map is generated dynamically as the scale changes, using cartographic generalization, and the map content (the different features and the detail with which they are represented) is continuously adjusted according to the corresponding accuracy. It must be strongly underlined, however, that map accuracy is a limitation that is impossible to eliminate, since maps are based on measurements with an intrinsic error; new technologies can minimize this error, but they will not eliminate it.
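
As a minimal sketch of this idea of scale-dependent content selection (one aspect of cartographic generalization), consider the following Python fragment; the feature classes and scale thresholds are purely illustrative assumptions, not taken from any specific product:

```python
# Minimal sketch of scale-dependent feature selection: each feature class
# declares the largest scale denominator at which it should still be drawn.
# Feature names and thresholds below are illustrative only.

FEATURES = [
    {"name": "motorway",     "max_scale_denominator": 500_000},
    {"name": "local_street", "max_scale_denominator": 25_000},
    {"name": "building",     "max_scale_denominator": 10_000},
]

def visible_features(scale_denominator: int) -> list[str]:
    """Return the feature classes rendered at the given scale
    (e.g. 25_000 means a 1:25,000 map)."""
    return [f["name"] for f in FEATURES
            if scale_denominator <= f["max_scale_denominator"]]

print(visible_features(250_000))  # ['motorway']
print(visible_features(10_000))   # ['motorway', 'local_street', 'building']
```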

Another limitation is cartographic deformation: the mathematical transformations used to portray the Earth's surface on a plane (the so-called map projections) inevitably distort the real shape of the Earth in terms of distance, direction, scale, or area. New technologies such as virtual globes aim to provide a less deformed representation: their rounded shape allows users to explore the Earth in three dimensions while streaming satellite imagery, elevation, and other data from the Internet. They are easy to use and offer a realistic experience, since users have full freedom to move around the globe and interact with it (Elvidge and Tuttle 2008). Virtual globes are becoming very popular due to the increased availability of geospatial data and the possibility of overlaying thematic or photo-realistic information. Many of them are available, Google Earth and NASA World Wind being the most popular (Brovelli et al. 2013a).
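
These distortions can be quantified. As a small worked example (an illustration, not tied to any specific virtual globe), the spherical Mercator projection stretches lengths by a factor of 1/cos(latitude), so at 60 degrees of latitude lengths appear twice as long and areas four times as large as at the equator:

```python
import math

def mercator_scale_factor(lat_deg: float) -> float:
    """Point scale factor k = 1 / cos(latitude) of the spherical Mercator
    projection: how much lengths are exaggerated relative to the equator."""
    return 1.0 / math.cos(math.radians(lat_deg))

for lat in (0, 30, 60, 80):
    k = mercator_scale_factor(lat)
    print(f"latitude {lat:2d} deg: lengths x{k:.2f}, areas x{k * k:.2f}")
# latitude 60 deg: lengths x2.00, areas x4.00, which is why high-latitude
# regions look far larger than they really are on many flat web maps.
```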

A third limitation has to do with the way spatial data are modeled and with the purpose of map production. For example, OpenStreetMap (OSM, https://www.openstreetmap.org), a collaborative project to create a free, online, editable map of the world, is designed primarily for routing, and therefore roads are modeled as vector lines; in a topographic database, especially at the 1:1,000 and 1:2,000 scales, the road width is an important attribute, so roads are modeled as polygons. The road widths visible in OSM are just an approximation based on the road type, which supports map interpretation but is not metrically correct. This does not mean that OSM is wrong; it means that the road width is not part of the model chosen by the OSM creators. This limitation creates challenges for spatial data integration in the present and in the future of cartography.
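
The modeling difference can be made concrete with a small, purely illustrative sketch: the OSM-style record below follows the publicly documented OSM data model (a way is an ordered list of node ids plus free-form tags), while the node ids, tag values, and polygon coordinates are made up for this example:

```python
# An OSM-style road: geometry is only the centreline (a vector line);
# any width information lives in tags, not in the geometry itself.
osm_way = {
    "type": "way",
    "nodes": [1001, 1002, 1003],   # ordered node ids tracing the centreline
    "tags": {
        "highway": "primary",      # width is usually inferred from this class
        "width": "7.5",            # or, at best, recorded as a tag in metres
    },
}

# A large-scale topographic database instead stores the carriageway outline,
# so the road width is part of the (metrically correct) geometry.
topo_road = {
    "type": "polygon",
    "exterior_ring": [
        (9.1890, 45.4640), (9.1893, 45.4641), (9.1894, 45.4638),
        (9.1891, 45.4637), (9.1890, 45.4640),   # closed ring of (lon, lat) pairs
    ],
}
```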

The next sections examine concepts, such as Digital Earth, Big Geo Data, the Internet of Things, and the Geospatial Web, that use computer technologies and cartographic knowledge to give birth to a new era of cartography, in which paper maps have evolved into multilayered and multidimensional ones.

2 The Digital Earth

Digital Earth (DE) is a concept that goes beyond the use of digital maps or virtual globes to visualize the Earth's surface and perform simple data retrieval tasks. For Janowicz and Hitzler (2012), DE is a distributed knowledge engine and question-answering system that supports scientists beyond mere data retrieval: it is about the exchange, integration, and reuse of heterogeneous geo data. The vision of DE was proposed by Al Gore (1998) as a multidimensional and multi-resolution model of the planet, with the goal of visualizing the huge amount of geo-referenced information related to the physical and socio-economic environment, based on the needs of different actors: scientists, decision makers, communities, and citizens, among others. DE is also a multitemporal and multi-layer information facility (Goodchild et al. 2012), allowing users to navigate not only through space but also through time, to access historical data and future predictions based on social and/or environmental models. DE information is updated in real time, thanks to sensor observations; and, most importantly, information is (or can be) interconnected.

In a 2011 workshop organized in Beijing by the International Society for Digital Earth (ISDE), an attempt was made to re-evaluate the vision of DE in the light of the many developments in information technology, data infrastructures, and Earth observation that have taken place since Al Gore's 1998 vision (Craglia et al. 2012). Participants identified two main features that DE should have: the first is participation, interaction, and collaboration among users, especially citizens, involving concepts such as crowdsourcing and citizen science; the second is the need to provide information about the relationships, networks, and other activities that occur on our Earth. To couple these two key features, DE needs to provide the necessary technologies, and the information produced by appropriate analytical tools, to the diverse users who participate in activities and modeling of the Earth.

DE offers a platform for free access and participation: it is now possible to gather data in a collaborative manner through current devices (computers, tablets, or smartphones), and in the near future it will be possible through new applications and devices not yet invented. Virtual globes are already being used as a base for participatory GIS (Brovelli et al. 2013b; Wu et al. 2010).

The vision of DE involves a wide range of disciplines related to cartography (remote sensing, geographic information systems, global positioning systems, the Internet and the World Wide Web, simulation and virtual reality, etc.) (Guo et al. 2010). This poses a problem, because different scientific disciplines use the same terms while the underlying meaning often differs to a degree where they become incompatible (Janowicz and Hitzler 2012).

The combination of different disciplines, along with an increasing number of users and devices, represents a challenge for DE, which needs to integrate them in a functional way. Delfos et al. (2014) outline a solution based on adaptive profiles that are aligned to the finite number of states a system can adopt, rather than to the limitless range of user or environment characteristics that cannot all be adapted to. Each profile consists of a combination of adaptive states covering functionality, information detail, or technical demands, used to optimize the system for individual users or technical environments. Such contextual information, applied to DE technologies, can enhance the user experience and be customized to the wide range of mobile devices used for compatible and adaptive access to web mapping systems.
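
As a data-structure sketch only (not the actual implementation of Delfos et al. 2014), a profile can be pictured as a named bundle of discrete adaptive states, onto which the open-ended variety of clients is mapped:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Profile:
    """One combination of adaptive states (illustrative values only)."""
    functionality: str        # e.g. "full-editing" vs. "view-only"
    information_detail: str   # e.g. "high" vs. "reduced"
    technical_demand: str     # e.g. "3d-globe" vs. "static-tiles"

# A finite set of profiles the system can adopt.
PROFILES = {
    "desktop":    Profile("full-editing", "high", "3d-globe"),
    "smartphone": Profile("view-only", "reduced", "static-tiles"),
}

def select_profile(device_class: str) -> Profile:
    """Map an arbitrary client description onto one of the finite profiles."""
    return PROFILES.get(device_class, PROFILES["smartphone"])
```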

Besides the large number of users, devices, technologies, and disciplines, DE also represents a huge and continuously growing amount of data and information, now called Big Data. DE could be considered Big Data, or it could be interpreted as a discipline based on the growth of Big Data. The theoretical frameworks and data systems for DE discussed by Guo et al. (2014) show that Big Geo Data is a distinctive characteristic of DE.

3 Big Geo Data

Since Doug Laney published a research note titled "3D data management: controlling data volume, velocity, and variety" (Laney 2001), the "3Vs" have become the three defining dimensions of Big Data, even though the term itself was not defined in Laney's note. Volume refers to the size of data sets, which keeps increasing over time, and to their relationships, which create a global graph of connected data. Variety refers to the number of sources and types of data, which is increasing as well. The combination of different sources and the integration of dissimilar formats (such as video, audio, photo, and plain text) allows a more holistic vision of the Earth, but raises new issues of data integration and semantic interoperability. Finally, Velocity is the speed at which data is created and updated. The number of data sources is increasing rapidly, and some of them produce near real-time data; this higher temporal resolution requires faster processing, namely a reduced time to filter and analyze the relevant data.

According to the 2013 IBM Annual Report (IBM 2013), 2.5 billion gigabytes of data are created every day, and 80% of it is "unstructured" data (everything from images, video, and audio to social media and a blizzard of impulses from embedded sensors and distributed devices), much of which is geo-referenced or can be geo-referenced. This is an incredible amount of data, corresponding approximately to a stack of DVDs reaching from the Earth to the Moon (Gloub 2011). In the past, data acquisition was the main issue; now the challenge is to find effective solutions to manage this data and to extract useful information from it.

The case of geo-referenced Big Data, or Big Geo Data, is no different. One of the main sources of Big Geo Data is satellites. Limiting the discussion to Earth observation, it is worth remembering that, in the past, optical satellite imagery could only reach resolutions of several tens of meters, whereas remote sensing data now reaches resolutions of the order of decimeters; an example is DigitalGlobe's new WorldView-3,Footnote 1 which has a panchromatic resolution of 31 cm and an average revisit time of less than 1 day, and is capable of collecting up to 680,000 km² per day. There are approximately 1100 active public and private satellites (Ritter 2014) with a wide variety of functions: GNSS navigation, weather forecasting, national defense, science and agriculture, and crop and drought monitoring, accumulating millions of bytes of information every day.

Fixed and mobile sensors (satellites, aircraft, webcams, UAVs, or even citizens) used for environmental, traffic, health, and industrial-process monitoring, among other applications, constitute another source of Big Geo Data (see Fig. 1). These sensors create a network of many spatially distributed devices that observe the Earth, like a sensitive skin that can be used to monitor the Earth's surface in real time. Location-based social networks such as Twitter, Facebook, and even Flickr create huge digital data sets of collective online behavior; 500 million tweets are sent per dayFootnote 2 and 1.83 million photos per day (on average) are uploaded to Flickr.Footnote 3 Other high-volume geo data sources include Volunteered Geographic Information, Smart Dust, complex transportation simulations, historical records, data made public by governments, and so forth.

Fig. 1 Sensors as sources of Big Data

As a consequence of the intrinsic characteristics of Big Geo Data, at least two issues should be considered in future research. First, the gathering and geoprocessing of Big Geo Data are very computationally intensive; hence, it is necessary to integrate high-performance, preferably Internet-based, solutions. Second, the problems of heterogeneity and inconsistency in geospatial data are well known and affect the data integration process, but they are particularly acute for Big Geo Data. Therefore, the optimization of feature-matching procedures will be one of the most challenging components of Big Geo Data integration (Gao et al. 2013).
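
To make the feature-matching problem concrete, here is a deliberately naive, brute-force sketch (illustrative only): features from two datasets are linked wherever they share a normalized name and lie within a distance threshold. Real Big Geo Data integration needs far more scalable techniques (spatial indexing, blocking, fuzzy attribute matching), which is precisely the optimization challenge mentioned above.

```python
import math

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6_371_000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = p2 - p1, math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def match_features(dataset_a, dataset_b, max_distance_m=50.0):
    """Pair up records that plausibly describe the same real-world feature.

    Each record is assumed to be a dict with "name", "lon" and "lat" keys;
    the O(n*m) double loop is only acceptable for tiny illustrative inputs.
    """
    matches = []
    for a in dataset_a:
        for b in dataset_b:
            same_name = a["name"].strip().lower() == b["name"].strip().lower()
            close = haversine_m(a["lon"], a["lat"],
                                b["lon"], b["lat"]) <= max_distance_m
            if same_name and close:
                matches.append((a, b))
    return matches
```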

The concept of Big Geo Data is becoming so popular right now partly because of its connection to the Internet of Things (IoT), which is explored in the next section. The IoT is characterized by things being given IP addresses and connected to the Internet; many more things or entities are expected to be connected to the Internet in the coming years, and they are expected to generate considerable amounts of data as well.

4 The Internet of Things and the Internet of Places

The IoT refers not only to things that we have control of, such as smartphones or tablets, but also to objects that are connected to the Internet and gather information from our surroundings. Things or objects range from the refrigerator and the heating system to advanced meteorological stations or traffic portals (Kjems 2014). In the near future, objects will be connected and will communicate through the Internet. Predictions are that 25 billion devices will be connected by 2015, and 50 billion by 2020;Footnote 4 according to Intel, cities will spend $41 trillion over the next 20 years on infrastructure upgrades for the IoT.Footnote 5 Thanks to location and time, objects become recognizable and acquire intelligence, because they can transmit data about themselves and access aggregated information from other objects. A typical example used to illustrate the idea of the IoT is: "Consider a smart refrigerator that keeps track of the availability and expiry date of food items and autonomously places an order to the closest grocery store if the supply of a food item is below a given limit" (Kopetz 2011).

Although much of the rise of the IoT concept has been around objects (or things) that become part of the Internet, much richer applications can be developed when location awareness (places) is considered. Companies like Cisco, Hewlett-Packard, and IBM have incorporated the IoT concept into projects with a strong Earth location-based component. The ALERT project of Cisco's Planetary Skin Institute is a decision-support Evaluation, Reporting and Tracking system for near real-time global land use, land cover change, and disturbance detection and analysis. It provides global coverage of land change events on the Earth, such as deforestation, and offers users a number of useful tools for identifying, characterizing, and responding to disturbances (Stanley 2011). Hewlett-Packard's Central Nervous System for the Earth (CeNSE) involves an intelligent network of billions of nanoscale sensors designed to feel, taste, smell, see, and hear what is going on at the Earth's surface. The idea is that these nanoscale sensors will quickly gather data and transmit it to powerful computing engines, which will analyze the information in real time for different applications and web services (Hartwell and Williams 2010). IBM's Smarter Planet initiative is boosting IoT-driven projects for water management systems, solutions to traffic congestion, greener buildings, and many others (Zhu et al. 2009).

If we add to the IoT mix not only things but also places, people, and systems, the IoT becomes just one piece of a larger concept, together with the Internet of People (such as social networks), Internet presence sites (such as FoursquareFootnote 6 or any place that can transmit information about itself), and the Internet of Information (i.e., World Wide Web systems that share information through an API or web services). This larger concept is what Gartner, Inc. called the "Internet of Everything".

A huge variety of IoT-based applications has been created since the concept's "birth" sometime around 2008 and 2009; however, much geospatial reasoning is still needed in the IoT concept. Beyond the location of things, the spatial relationships among things (spatial analysis) must also be considered.

The base technology behind both DE and the IoT is the Geospatial Web. The next section describes the mechanisms used to transfer geo data over the web, and how this technology enables data visualization, a strong theme in Gore's vision of DE.

5 Geospatial Web: catalogs, processing and visualization

The Geospatial Web consists of location-based web technologies, usually manifested on the Internet (Avraam 2010). Examples of Geospatial Web applications are geobrowsers and virtual globes. One of the most important technologies facilitating the interchange of data in the Geospatial Web is Web Services. In general, Web Services are software systems designed to support machine-to-machine interaction over a network (Sample et al. 2008). More specifically, Web Services are self-contained, self-describing, modular applications that can be published, located, and invoked across the Web.

A particular type of Web Service is the geospatial Web Service, which can be classified into Discovery, Access, and Processing Services. Discovery Services (or catalog services) allow users to find resources (data and services) through metadata and support downloading data or linking to related websites and applications; Access Services provide standardized access to geospatial information, delivering "raw" geospatial data or maps; and Processing Services perform spatial processing such as coordinate transformation, fusion, and overlay, among others.

The Open Geospatial Consortium (OGC), the international organization that develops open standards for geospatial content, has created web service specifications for the exchange of different types of geospatial data: from delivering a map as an image with WMS (Web Map Service)Footnote 7 to providing the data as vectors with WFS (Web Feature Service)Footnote 8 or as rasters with WCS (Web Coverage Service).Footnote 9 For data discovery, the Catalog Service for the Web (CSW)Footnote 10 delivers a catalog of geospatial records in XML over the Internet. For data processing, the Web Processing Service (WPS)Footnote 11 provides client access across a network to pre-programmed calculations and/or computation models that operate on spatially referenced data. The calculation can be extremely simple or highly complex, with any number of data inputs and outputs; the idea behind WPS is to standardize the way in which geospatial processes are invoked. The standard is very generic by nature: it does not identify any specific process, so each process must be developed and standardized for interoperability purposes (Castronova et al. 2013).
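
As a sketch of how such a service is invoked, the fragment below builds a WMS 1.3.0 GetMap request with the query parameters defined by the standard; the server URL and layer name are placeholders, and the third-party requests library is assumed to be available:

```python
import requests  # third-party HTTP library, assumed available

WMS_ENDPOINT = "https://example.org/geoserver/wms"  # hypothetical WMS server

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "topography:roads",     # placeholder layer name
    "STYLES": "",
    "CRS": "EPSG:4326",
    # WMS 1.3.0 with EPSG:4326 uses lat/lon axis order: miny,minx,maxy,maxx
    "BBOX": "45.40,9.10,45.55,9.30",
    "WIDTH": 800,
    "HEIGHT": 600,
    "FORMAT": "image/png",
}

response = requests.get(WMS_ENDPOINT, params=params, timeout=30)
response.raise_for_status()
with open("map.png", "wb") as f:
    f.write(response.content)  # the service replies with a rendered map image
```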

On the Geospatial Web it is also possible to access different servers to obtain one unified map; this is called a map mashup. In general, a mashup can be defined as a website that combines content from more than one source to create a new user experience, or as "the mechanism for integrating and displaying information from multiple sources" (Goodchild 2008). The name "mashup" comes from the pop music term for two or more songs combined into a new song (Li and Gong 2008).
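
A minimal map-mashup sketch, assuming the third-party folium library and a hypothetical WMS endpoint, combines a public OpenStreetMap tile base layer with a WMS overlay from a second server into one interactive map:

```python
import folium  # third-party mapping library, assumed available

# Base layer from one source (OpenStreetMap tiles)...
m = folium.Map(location=[45.46, 9.19], zoom_start=12, tiles="OpenStreetMap")

# ...combined with an overlay from a different (hypothetical) WMS server.
folium.raster_layers.WmsTileLayer(
    url="https://example.org/geoserver/wms",  # placeholder WMS endpoint
    layers="landuse:parcels",                 # placeholder layer name
    fmt="image/png",
    transparent=True,
    name="WMS overlay",
).add_to(m)

folium.LayerControl().add_to(m)  # let the user toggle the two sources
m.save("mashup.html")            # one unified map built from multiple servers
```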

Once data have been retrieved through geospatial web services, they need to be visualized, either in 2D or 3D. Three-dimensional web visualization of geospatial data has been common practice for more than a decade (Manferdini and Remondino 2010; Brovelli et al. 2014; Zhu et al. 2003). Some geobrowsers present a multidimensional (2D, 3D, and even 4D) web mapping system, or a synchronized multiframe system that provides users with a 3D virtual globe view in one frame alongside a 2D view in another (see Fig. 2). They are also multitemporal, offering the opportunity to navigate historical and cultural maps and digital cities over time. This Geospatial Web visualization environment provides professional (and non-professional) users with a rich experience for browsing and visualizing Digital Earth maps in space and time.

Fig. 2 Examples of multidimensional 3D visualization

6 Conclusions

The present paper has given a concise overview of the state of the art and of the advancement of DE and its related concepts. DE and the Internet are scientific and technological areas that will help answer the challenges of developing observation systems, improving prediction models, and developing technological, political, and social solutions, together with the effective management of Big Geo Data.

Digital Earth applications and concepts have resolved some problems of traditional cartography, but they pose new challenges, such as Big Data processing. We are producing more data than can be stored, so a big question arises: how do we extract relevant patterns from data and reduce the volume of information we need to preserve?

The representation of the Earth has always been one of the main purposes of cartography; therefore, cartographers must be able to respond to the new challenges of DE. The geomatics community has the duty to promote a new cartography, largely still to be reinvented, that would put cartographers at the center of the processes of knowledge and management of the Earth. Map makers in the past helped discover new "worlds"; now the challenge is to rediscover our common world with new eyes of environmental, social, and economic equity, sustainability, and participation.