Introduction

Geographic information has been in use for centuries, with paper maps long serving as the primary source of such information. Nowadays, data is often available as spatial mash-ups created from various data sources (Zhou et al. 2014). Over the past few decades, considerable work has been carried out in the field of Geographic Information Systems (GIS) and related topics. GIS is designed to deal with large amounts of geographic data and has enhanced spatial data by associating computer and network technologies with it. It enables end users to perform tasks ranging from viewing data at different resolutions to finding the shortest path in transportation (Zhan 1997), performing terrain analysis in environmental applications (Neteler et al. 2012), finding the best location for a business (Eddie et al. 2007), and identifying crime hotspots for security agencies (Chainey et al. 2008).

In its early stages, GIS was monolithic and very costly. It required specialized skills and heavy investment in setup and was mainly used within organizations by a few well-trained users. Initially, GIS was developed with limited hardware capabilities and support (Goodchild 1992). The integration of GIS with the web has made GIS available to the general public in an easy and efficient way (Green and Bossomaier 2004). The increasing popularity and decreasing cost of laptops, smart phones, and other mobile devices gave the Internet a major role in day-to-day activities, which in turn resulted in the widespread use of webGIS. Several initiatives by developer and user communities have subsequently increased the popularity of webGIS, and several webGIS architectures have been proposed to meet the evolving needs.

The aim of the present paper is to provide an overview of webGIS and its architectures. GIS is discussed first, followed by the evolution of the Internet and the World Wide Web (WWW or web). The evolution of webGIS and the milestones of this process are then explained. This is followed by a discussion of various architectures, namely client-server architecture, SOA, and cloud computing. A comparison of these architectures is also carried out, which will help in selecting a webGIS architecture for the problem at hand.

GIS

It is estimated that 80% of all data contains a spatial aspect (Klinkenberg 2003). Paper maps have been an important source of spatial information for centuries, and maps have repeatedly been used for problem solving: in 1854, for example, Dr. John Snow used maps to identify the hand pump that was the cause of the cholera epidemic in London (Koch 2004). The advent of computers had an impact on many fields, including GIS. Nowadays, captured data are stored in GIS, which provides not only flexible retrieval and attractive presentation of the data but also analysis and manipulation capabilities. The first remarkable work in the field was the Canadian GIS, carried out by Roger Tomlinson for the Canada Land Inventory, which made him known as the father of GIS (Geller 2007).

GIS has been defined in various ways in the literature; some of these definitions are reproduced here. According to Burrough (1986), “GIS is a powerful set of tools for collecting, storing, retrieving at will, transforming and displaying spatial data from the real world.” As per Cowen (1988), “GIS is a decision support system involving the integration of spatially referenced data in a problem solving environment.” Longley et al. (2005) defined GIS as “a container of maps in digital form for the general public while as a tool for revealing hidden content in geographic information for scientists and investigators.” Rigaux et al. (2001) defined GIS as “a tool that stores geographic data, retrieves and combines this data to create new representations of geographic space, provides tools for spatial analysis and performs simulations to help expert users organize their work in many areas.” According to Heywood (2010), GIS is used to answer generic questions related to location, geographic pattern, the effect of spatial conditions, trends, and the spatial implications of an action. Bolstad (2012) defined GIS as “a computer based system to aid in the collection, maintenance, storage, analysis, output and distribution of spatial data and information.” In the early 1990s, the web emerged and had a stupendous impact on the computing world; GIS professionals attempted to adapt GIS to the web environment, which gave birth to webGIS.

Internet and World Wide Web

The need for distributed and networked systems was realized in the USA during the 1950s, when intercontinental ballistic missiles were being developed (Denning 1989). In 1958, the US Department of Defense (DoD) established the Advanced Research Projects Agency (ARPA), which started the ARPANET project to build a distributed network of computers. In December 1969, four ARPA test sites became operational, located at the University of California Los Angeles (UCLA), the Stanford Research Institute (SRI), the University of California Santa Barbara, and the University of Utah (Lukasik 2011). The network then grew rapidly, connecting different networks, and in 1973 it was connected to Europe and became intercontinental. During the 1970s, protocols for transferring data between different types of computer networks were developed, which later helped in the creation of a worldwide network (Davis and O’Sullivan 1998). In the 1980s, the National Science Foundation (NSF) joined ARPANET to serve supercomputer users, but the network suffered from performance issues, so the NSF built a higher-capacity network known as NSFNet. Later, the combination of ARPANET, NSFNet, and other networks became popular as the Internet.

Several services were provided on the Internet, such as telnet for remote login, FTP for file transfer, and usenet for newsgroups. Gopher, invented at the University of Minnesota in 1991, was the most popular of these; it was used to retrieve data placed on servers connected to the Internet and served as a gateway to other Internet services (Frana 2004). Meanwhile, the web had been invented by Tim Berners-Lee at CERN (the European Organization for Nuclear Research), along with HTML (Hypertext Markup Language), HTTP (Hypertext Transfer Protocol), and the URL (Uniform Resource Locator). When Gopher began to charge a license fee in February 1993, people started looking for alternatives, and the web became their choice. The combination of HTML and HTTP in graphical browsers provided many features, such as forms, refresh meta tags, scripting, and applet objects, which made the web superior to Gopher and other services (Khare 1999).

The WWW or web is a collection of hypertext documents that are linked to other documents located on computers anywhere in the world (Ingram 1995). These documents are further connected to other documents or media, making a “web” of interconnected documents. The WWW uses a set of protocols for exchanging files and other documents located on computers identified by unique IP addresses (Stubkjaer 1997). Many organizations are involved in the development and promotion of open standards for the web, such as the World Wide Web Consortium (W3C), the International Organization for Standardization/Technical Committee 211 (ISO/TC 211), the Open Geospatial Consortium (OGC), and the Organization for the Advancement of Structured Information Standards (OASIS). The W3C provides the information technology baseline standards; ISO/TC 211 develops abstract but detailed baseline standards; OGC focuses on implementation-oriented standards, including web mapping and the Geography Markup Language (GML), that fit into the abstract frame set by ISO/TC 211 (Kresse 2004); and OASIS produces web services standards for security and e-business.

Web applications are used to perform major tasks on the web. These applications run on a web server, and their results often depend on user input supplied through the web browser (Conallen 1999). Compared to desktop applications, web applications are more open, since they must work in a wide variety of operating environments and support multiple clients (Li et al. 2014). They can be static or dynamic in nature: static web applications do not change with user input, whereas in dynamic web applications the user's input and interaction affect the web page, as in the sketch below.
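As a minimal illustration of the dynamic case, the following Python sketch serves a page whose content depends on a query parameter; the handler, port, and parameter name `place` are purely illustrative and not tied to any particular webGIS product.

```python
# Minimal dynamic web application: the response depends on user input.
# A static application would return the same document for every request.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class DynamicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Extract the user input from the query string, e.g. /?place=London
        query = parse_qs(urlparse(self.path).query)
        place = query.get("place", ["world"])[0]
        body = f"<html><body><h1>Hello, {place}!</h1></body></html>".encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), DynamicHandler).serve_forever()
```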

The web has had an impact on almost every field, from business (Chu et al. 2007; Hang 2009) to customer relationship management (Lei and Yang 2010), governance (Ning et al. 2009; Awoleye et al. 2014), emergency management (Li et al. 2006; Youn et al. 2008; Demir and Krajewski 2013), education (Lin et al. 2008; Allison et al. 2012), the environment (Walker and Chapra 2014), and applications related to public services such as transportation (Karacapilidis et al. 2006), libraries (Zhao 2010; Wenyun et al. 2010), health (Tamura et al. 2014), and counseling (Zhao and Tian 2010). The growth of the web had a major impact on GIS, resulting in the development of webGIS, which started with simple web mapping websites and then passed many milestones.

Evolution of the webGIS

The web has had a phenomenal impact on the computing world, and webGIS evolved with its growth. The rapid development of the web made valuable GIS data accessible to the public through webGIS (Yang et al. 2005), which adapted GIS to the web environment and added new dimensions to the growth of GIS, as large and complex GIS applications became available to the general public through the web. This integration of GIS with Internet technology has had revolutionary effects, such as interactive access to geospatial data, real-time data integration and transmission, and access to platform-independent GIS analysis tools (Su et al. 2000; Karnatak et al. 2012).

Major events

The beginning was made with the development of static web mapping sites, which provided visualization functionality but were devoid of GIS analysis capabilities. These then grew into interactive web mapping sites and, as the technology matured, evolved into distributed web and/or GIS services (Peng and Tsou 2003). Hardie (1998) and Li (2008) discussed various milestones in the evolution of webGIS.

The first webGIS initiative came in 1993, when Putz (1994) developed an interactive web map site at Xerox PARC (Palo Alto Research Center). In 1994, Canada released the first online atlas, the Canadian National Atlas Information Service, which generated maps as per user requirements by rendering overlays of user-selected datasets. In the same year, the National Spatial Data Infrastructure (NSDI) was created by executive order of the US president, followed by the establishment of the National Geospatial Data Clearinghouse (Crompvoets et al. 2004). The year 1995 was a big one for webGIS, with three path-breaking releases: the Alexandria Digital Library (Smith and Frew 1995), TIGER (Topologically Integrated Geographic Encoding and Referencing), and GRASSLinks. In 1996, MapQuest put mapping and routing services online, offering up to ten levels of detail, although maps outside the USA were incomplete. In the same year, MapObjects, a collection of embeddable mapping and GIS components, was released by Esri, one of the most important GIS vendors (Li et al. 2004).

Around 1999, Web 2.0 emerged, bringing revolutionary changes to the use of the web and related disciplines. WebGIS was also greatly affected by this development, which led to many new applications such as Google Maps, Google Earth, and Microsoft Bing Maps. In 2000, Refractions Research released PostGIS, open source support for spatial objects in PostgreSQL. In 2005, the web development technique AJAX (Asynchronous JavaScript and XML) came into the picture; Google Maps is a classic example of an application developed using AJAX (Fu and Sun 2010). In the same year, beta versions of Yahoo Maps and Google Maps were released, and a major event in 3D webGIS occurred in the form of Google Earth (Yu and Gong 2012). These milestones in webGIS evolution clearly indicate the growing popularity of, and need for, this field.

Emerging data sources

The invention of devices and techniques such as GPS, smart phones, and social networking sites paved the way for new data sources. Amateurs became providers of spatial data, a phenomenon known as Volunteered Geographic Information (VGI), a term coined by Michael F. Goodchild (Goodchild 2007a). Here, citizens act as sensors and collect information on site (Goodchild 2007b). The basic technologies required for the creation of VGI are global navigation satellite systems such as GPS or GPS-enabled mobile devices, Web 2.0, and broadband connectivity (Heipke 2010). Since the data can be collected by anyone, its quality must be checked; some quality assurance methods are discussed in various papers (Goodchild and Li 2012; Bordogna et al. 2014; Yung et al. 2014). Popular examples of VGI websites include OpenStreetMap (Haklay and Weber 2008), Flickr (Kennedy et al. 2007), and Wikimapia.

Initiatives of OGC

By the year 2000, producer-operated map servers were growing very fast, but they operated in isolation (Morris 2006), and the services they provided were not integrated. Map services therefore needed a revolution, which was brought about by various initiatives of the OGC, the leading body devoted to the growth of open standards and interoperability of geographic information. Geospatial web services are part of web services, but they differ slightly from common services due to the inherent characteristics of the geospatial data on which they operate, which is diverse, huge, and complex (Granell et al. 2010). The OGC provides them as the Web Map Service (WMS) for the display of maps in digital image formats, the Web Feature Service (WFS) for access to geospatial features encoded in GML, and the Web Coverage Service (WCS) for access to geospatial data describing space-varying phenomena such as satellite imagery, digital elevation models, or triangulated irregular networks (Lupp 2008).
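To make the WMS interface concrete, the following Python sketch issues a GetMap request; the parameter names come from the WMS 1.3.0 specification, while the endpoint URL and layer name are hypothetical placeholders.

```python
# Sketch of a WMS 1.3.0 GetMap request: the server renders the requested
# layers into an image and returns it for direct display in the browser.
from urllib.parse import urlencode
from urllib.request import urlopen

WMS_ENDPOINT = "https://example.org/geoserver/wms"  # hypothetical server

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "demo:rivers",          # hypothetical layer name
    "CRS": "EPSG:4326",
    "BBOX": "40.0,-75.0,42.0,-72.0",  # bounding box (axis order per CRS)
    "WIDTH": "800",
    "HEIGHT": "600",
    "FORMAT": "image/png",
}

with urlopen(WMS_ENDPOINT + "?" + urlencode(params)) as response:
    with open("map.png", "wb") as f:
        f.write(response.read())  # the finished map image, ready to render
```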

As per the OGC (de la Beaujardiere 2006), the WMS produces geographic information as a digital image file, commonly known as a map, which is generally rendered on screen in an image format such as SVG, PNG, GIF, or JPEG. The WFS allows a client to retrieve geospatial data encoded in GML from multiple web feature services (Vretanos 2002). GML is an XML grammar written in XML Schema for the modeling, transport, and storage of geographic information, including both the spatial and non-spatial properties of geographic features (Cox et al. 2005). WFS-based case studies can be found in research papers published by Peng and Zhang (2004), Peng (2005), and Zhang and Li (2005).
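The WFS counterpart returns the features themselves rather than a rendered image. The sketch below, again with a hypothetical endpoint and feature type, requests features and extracts coordinate strings from the GML response; the namespace shown is that of GML 3.

```python
# Sketch of a WFS GetFeature request returning features encoded in GML.
import xml.etree.ElementTree as ET
from urllib.parse import urlencode
from urllib.request import urlopen

WFS_ENDPOINT = "https://example.org/geoserver/wfs"  # hypothetical server

params = {
    "SERVICE": "WFS",
    "VERSION": "1.1.0",
    "REQUEST": "GetFeature",
    "TYPENAME": "demo:rivers",   # hypothetical feature type
    "MAXFEATURES": "10",
}

with urlopen(WFS_ENDPOINT + "?" + urlencode(params)) as response:
    tree = ET.parse(response)    # the GML document itself, not an image

# GML carries both spatial properties (coordinate lists) and non-spatial
# attributes inside each feature member element.
GML = "http://www.opengis.net/gml"
for pos_list in tree.iter(f"{{{GML}}}posList"):
    print(pos_list.text)         # raw coordinate string of one geometry
```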

Impact of open source software

The open source movement has had a significant impact on the reach of webGIS to users of different platforms. FOSS (Free and Open Source Software) has made software available to all, free of cost. According to the Free Software Foundation, free software grants four freedoms (Stallman 2002): to run the program for any purpose, to study how the program works and adapt it to one's needs, to redistribute copies, and to improve the program, including releasing the improvements to the public. The second and fourth freedoms require that the source code be delivered with the software. Steiniger and Bocher (2009) used four indicators to measure the popularity of FOSS, namely the number of projects started within a period of a few years, financial support by governments for the development of FOSS GIS projects, the download rate of the software, and the number of use cases. Several FOSS options are available; many of them, and associated case studies, are discussed in research papers by Caldeweyher et al. (2006), Ramsey (2007), Jing et al. (2008), Xia and Xie (2009), Pollino et al. (2012), Steiniger and Hunter (2013), Teodoro and Duarte (2013), and Agrawal and Gupta (2014).

WebGIS architecture

In general, webGIS applications have a web browser as the client for sending requests and a web server for responding to them. Non-spatial web applications usually contain only a web server, but webGIS has an additional server, called the data or map server, for spatial data. This server handles geospatial data, provides geospatial services such as WMS and WFS, and can perform GIS functionalities such as editing, routing, and object tracking. The client can make requests to a server located anywhere using middleware technologies such as Remote Procedure Calls (RPC) or Open Database Connectivity (ODBC) (Tsou and Buttenfield 2002); a minimal sketch of this two-server request flow is given below. WebGIS architecture has grown from the multi-tier approach to plug-and-play, to SOA (Yang et al. 2010), and to cloud computing (Yang et al. 2011).
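The following Python sketch illustrates, under simplified assumptions, the division of labor just described: a web-tier handler receives the browser's request and relays the spatial part to a separate map server. Both URLs are hypothetical placeholders, and a real deployment would typically use middleware such as RPC rather than a plain HTTP relay.

```python
# Sketch of the webGIS request flow: browser -> web server -> map server.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

MAP_SERVER = "https://maps.example.org/render"  # hypothetical map/data server

class WebTier(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the query string to the map server and relay its image back.
        query = (self.path.split("?", 1) + [""])[1]
        with urlopen(MAP_SERVER + "?" + query) as upstream:
            image = upstream.read()
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.end_headers()
        self.wfile.write(image)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), WebTier).serve_forever()
```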

Client-server architecture

This architecture follows a traditional network design. It has different variants, namely the thin client, thick client, and hybrid architectures, which are discussed in the following subsections.

Thin client architecture

This architecture, as shown in Fig. 1, has minimal resource requirements on the client side, and most of the processing is done on the server side. When a client makes a request, the server generates a response, which in its simplest form may be a map generated from the database. This spatial response is in a web-friendly file format so that it can be rendered by the web browser. The client cannot directly call the GIS server; an interpreter such as the Common Gateway Interface (CGI) or some other gateway script is therefore required (Peng 1999). Another server side option is the servlet, a Java program. The servlet is more efficient than CGI, as it does not need to start, load, and stop for each request and can handle multiple client requests (Mi et al. 2004). Other technologies include the Application Programming Interface (API), Active Server Pages (ASP), and Java Server Pages (JSP). An early example of this architecture is the web mapping site developed at Xerox PARC (Putz 1994); a minimal gateway script is sketched after Fig. 1.

Fig. 1

Thin client architecture (adapted from Alesheikh et al. (2002))
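The sketch below shows, in Python, the shape of a CGI-style gateway script in this architecture: the web server invokes it for each request, it asks the GIS engine to rasterize a map, and it streams the finished image back to the browser. The `render_map` function is a hypothetical stand-in for the actual GIS engine call.

```python
#!/usr/bin/env python3
# Sketch of a CGI gateway script: the thin client only displays the image.
import os
import sys
from urllib.parse import parse_qs

def render_map(bbox, width, height):
    """Placeholder for the GIS engine call that rasterizes the map."""
    return b"\x89PNG..."  # image bytes would be produced here

query = parse_qs(os.environ.get("QUERY_STRING", ""))
bbox = query.get("bbox", ["-180,-90,180,90"])[0]
width = int(query.get("width", ["800"])[0])
height = int(query.get("height", ["600"])[0])

# CGI contract: headers first, a blank line, then the response body.
sys.stdout.buffer.write(b"Content-Type: image/png\r\n\r\n")
sys.stdout.buffer.write(render_map(bbox, width, height))
```

Because the script starts, runs, and exits for every request, the servlet model mentioned above, which stays resident and serves many requests, is the more efficient choice.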

This architecture has several advantages, as there is little responsibility on the client side: the client needs only enough resources to send an HTTP request to the server and display the processed result, which is mainly in an image format such as JPEG, GIF, or PNG. Thus raster data can easily be displayed, but vector data cannot be rendered (Alesheikh et al. 2002). Centralized control rests with the server, which makes data updates and maintenance easier but cannot address individual client needs and requirements. It is a low-cost solution, as there is not much capital investment on the client side. On the other hand, it places the full load on the server, which makes response times high due to bandwidth and other issues. The main disadvantage of this architecture is the limited functionality on the client side, which forces the client to issue a new request even for basic map operations such as zooming and querying (Huang et al. 2010). This increases the number of interactions with the server while the capability of the client is not fully utilized (Jiangfeng et al. 2009).

Thick client architecture

In this architecture, the client is more powerful, as the browser's capabilities are augmented by plug-ins, applets, or ActiveX controls, as shown in Fig. 2; processing can therefore be performed on the client side as well as the server side. A plug-in is an executable that operates on a specific data type; it must be installed in advance on the client machine and provides specialist viewing and manipulation functions for its native data type (Abel et al. 1998). Another option is the applet, a Java executable that does not require pre-installation and can carry out most of the visualization processing on the client side. Raw data provided by the server can be processed on the client by various operations: for example, it can be modified by a filter process, the filtered data can be mapped to some geometric representation, and the result can be rendered using shading and lighting (Huang 2003); a sketch of such a client-side chain follows Fig. 2. One such model for environmental applications is discussed by Huang and Worboys (2001). ActiveX can be used only in Microsoft Internet Explorer and is therefore not recommended for applications accessed from different web browsers (Huang and Lin 2002).

Fig. 2

Thick client architecture (adapted from Alesheikh et al. (2002))
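The following Python sketch is a conceptual illustration of the filter-map-render chain described above; the data layout and function names are invented for the example, and a real client would run the equivalent logic in an applet or plug-in.

```python
# Conceptual client-side chain: raw server data -> filter -> map -> render.
# The (id, x, y, value) tuple layout is purely illustrative.

raw_data = [("p1", 10.0, 20.0, 3.5), ("p2", 11.0, 21.0, 9.1),
            ("p3", 12.0, 19.5, 7.2)]  # raw observations sent once by the server

def filter_step(records, threshold):
    # 1. Filter: keep only the observations the user currently cares about.
    return [r for r in records if r[3] >= threshold]

def map_step(records):
    # 2. Map: turn each observation into a geometric primitive (a circle).
    return [{"center": (x, y), "radius": value / 2} for _, x, y, value in records]

def render_step(shapes):
    # 3. Render: draw the primitives (printed here; a real client would
    #    apply shading and lighting when drawing).
    for s in shapes:
        print(f"circle at {s['center']} radius {s['radius']:.1f}")

# Because the raw data stays on the client, the whole chain can be re-run
# with a new threshold without another round trip to the server.
render_step(map_step(filter_step(raw_data, threshold=5.0)))
```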

This architecture has the advantage that the client side can keep running even when there is low or no connectivity with the server, because the raw data provided by the server can be used for different purposes. This also means that the client does not need to contact the server for small actions, which in turn lessens the load on the server, as some of the processing is performed on the client side. The client and server side technologies used in each approach are summarized in Table 1, based on papers by Sorokine and Merzliakova (1998), Peng (2001), and Chang and Park (2006).

Table 1 Comparison between different client side technologies

Comparison between thin and thick client architectures

As discussed in the previous sections, both architectures have advantages and disadvantages. On this basis, a comparison is presented in Table 2.

Table 2 Comparison between thin and thick client architectures

Hybrid architecture

This is a combination of the thin and thick client architectures: tasks related to data manipulation are performed on the server side, while tasks related to user interaction are handled on the client side. It uses a combination of client and server side technologies. Initially, this architecture was based on the applet-CGI combination, i.e., an applet on the client side and CGI on the server side; the applet-servlet combination later evolved as a more efficient option. Lin et al. (1999) discussed this type of architecture for 3D visualization.

Service-oriented architecture

In client-server architecture, performance, scalability, maintenance, and other quality parameters are major architectural issues (Duchessi and Smith 1998). In the meantime, small-scale web services hit the market, each catering to a specific client need. According to Harrison and Reichardt (2001), web services are a new model for Internet-based applications, unlike complex and tightly bundled software packages. Geospatial web services differ from traditional web services due to the wide variety of spatial data models, data formats, data semantics, and relationships (Vescoukis et al. 2012). This creates interoperability issues, which the OGC has addressed with interface specifications for several services and languages, such as WMS, WFS, and GML.

Such web services work as stand-alone solutions in which any kind of integration is quite difficult. SOA emerged as a solution to this problem of disparate systems and data by integrating applications and information of different natures (James 2010). SOA provides an architectural style concerned with the temporary relationship between service provider and consumer, the runtime issues of the service provider, and the expectations of the service consumer (Gu and Vliet 2009). The OASIS reference model defines it as a paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains (MacKenzie et al. 2006). SOA for webGIS has been discussed in detail by various researchers, such as Hundling and Weske (2003), Lu (2005a), Zhang et al. (2008), and Sha and Xie (2010). In the literature, SOA has been used in system development across a wide range of fields, from public health (Lu 2005b) to transportation (Lu 2006a, b) to natural resources (Huang et al. 2011).

In SOA, web services can play three different roles: provider, broker (or catalog), and requestor (or user). Each service has an implementation and an interface: the implementation gives the internal specification of the service, while the interface denotes the externally visible behavior through which it interacts with other services. Services have two types of interface, namely buy and sell. The buy interface specifies the services a service requires and plays the requestor role; the sell interface specifies the services it provides and plays the producer role (Van Der Aalst et al. 2007). The provider publishes its services at the broker, where the customer discovers the required services; by invoking the bind function, the customer then becomes a consumer of the provider, as shown in Fig. 3. A minimal sketch of this publish-find-bind cycle follows the figure.

Fig. 3

GIS service-oriented architecture (adapted from Vaccari et al. (2009))
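The sketch below expresses the three roles and the publish-find-bind cycle in plain Python; the class names and the callable-based "service" are illustrative only, since a real SOA would use a UDDI registry, WSDL interfaces, and SOAP messages (described next).

```python
# Minimal sketch of the publish-find-bind cycle among the three SOA roles.

class Broker:
    """Catalog where providers publish and requestors discover services."""
    def __init__(self):
        self._catalog = {}

    def publish(self, name, endpoint):
        self._catalog[name] = endpoint

    def find(self, name):
        return self._catalog[name]

class Provider:
    def buffer_service(self, geometry, distance):
        # Stand-in for a real geoprocessing operation.
        return f"buffer({geometry}, {distance})"

broker = Broker()
provider = Provider()

# 1. Publish: the provider registers its sell interface with the broker.
broker.publish("BufferService", provider.buffer_service)

# 2. Find: the requestor discovers the service it needs in the catalog.
service = broker.find("BufferService")

# 3. Bind: invoking the service turns the requestor into a consumer.
print(service("POINT(10 20)", 5.0))
```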

The provider registers its service in the Universal Description, Discovery and Integration (UDDI) registry. Its interface is written in the Web Services Description Language (WSDL), which describes the functionality of the web service, the signatures of its operations, and its data types (Lapadula et al. 2011). Services communicate with each other using the Simple Object Access Protocol (SOAP), which provides a mechanism for the exchange of structured messages (Box et al. 2000). WSDL provides the technical and interface details of a service and is used when publishing the service in the registry. A WSDL document contains one or more XML Schema Definitions (XSD) that describe the metadata; the combination of WSDL and XSD defines the web service interface. WSDL specifies the operations that represent the functionality of the service and the data used in these operations, while XSD gives the type and structure of the messages; thus XSD provides the message format that is not given by WSDL. In WSDL, a simple interaction between services usually follows RPC encoding, in which the functionality of each web service is described as an operation along with a list of parameters. The UDDI registry contains web service information such as contract, policy, and version; it helps in the discovery of WSDL documents and is itself accessed via SOAP. The OGC Catalogue Service for the Web (CSW) is usually used as a repository containing documentation, logging, etc. The registry and repository help in the discovery of individual services, which are otherwise difficult to locate. SOA can be made clearer through static and dynamic Unified Modeling Language (UML) modeling, which is discussed by Baresi et al. (2003).
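As an illustration of the message format, the Python sketch below builds a SOAP 1.1 envelope and posts it over HTTP; the endpoint, namespace, operation name, and SOAPAction header are hypothetical and would in practice be taken from the service's WSDL document.

```python
# Sketch of a SOAP 1.1 request: a structured XML message posted over HTTP.
from urllib.request import Request, urlopen

envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetFeatureCount xmlns="http://example.org/gisservice">
      <layerName>rivers</layerName>
    </GetFeatureCount>
  </soap:Body>
</soap:Envelope>"""

request = Request(
    "https://example.org/gisservice",          # hypothetical endpoint
    data=envelope.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://example.org/gisservice/GetFeatureCount",
    },
)
with urlopen(request) as response:
    print(response.read().decode("utf-8"))     # SOAP response envelope
```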

Any system built according to SOA follows these principles (Papazoglou and van den Heuvel 2007; James 2010):

Loose coupling

It separates web services from each other and from the underlying interface, insulating consumer and producer so that a change in one does not affect the other. To achieve loose coupling, communication between the consumer and the provider uses a message exchange technique that avoids any direct technical connection between them; a minimal sketch of such decoupled message exchange follows this list.

Interoperability

It implies that service providers and consumers can be built on different platforms using different technologies. In SOA, XML, which is supported by most platforms, is generally used for message encoding.

Reusability

It means using something again. In the context of SOA, applications are composed from existing web services. Often, existing web services do not completely fulfill the requirements; in such cases the web services need to be extended, which is a difficult task.

Discoverability

To reuse any service, it is necessary to find it, so discoverability is an important principle of SOA. A service registry, much like a catalog or telephone directory, is used to find out which services are available. These registries usually follow the UDDI specification.

Governance

It provides a set of rules for measuring the degree to which the system adheres to the principles above and supplies remedial solutions in case of non-compliance.
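The following Python sketch illustrates the loose coupling and interoperability principles under simplified assumptions: producer and consumer share only a message channel and an agreed XML format, never a direct reference to each other; an in-process queue stands in for real messaging middleware.

```python
# Loose coupling via message exchange: no direct technical connection
# between producer and consumer, only a channel and an XML contract.
import queue
import xml.etree.ElementTree as ET

channel = queue.Queue()  # the only artifact both sides know about

def producer():
    # Publishes an XML-encoded message (interoperability: XML can be
    # read regardless of the consumer's platform or language).
    channel.put('<reading station="S1" value="42.7"/>')

def consumer():
    # Consumes whatever arrives; it never calls the producer directly,
    # so either side can be replaced without changing the other.
    msg = ET.fromstring(channel.get())
    print(msg.get("station"), msg.get("value"))

producer()
consumer()
```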

SOA has several benefits over the previous architectures, as it provides an open and interoperable environment and promotes geospatial data sharing among disparate systems in a cost-efficient manner (Zhang et al. 2007). According to Christensen et al. (2007), SOA is more efficient than standard GIS applications and provides real-time information more easily: standard GIS applications offer a whole range of functionalities of which only a subset is used and the rest is wasted, whereas SOA provides only those services that the user requires. Since SOA fetches data for each processing activity, its results are based on up-to-date data repositories.

System development under SOA proceeds through well-defined roles: the business analyst begins by establishing the requirements and the business process model; the software architect then designs services to fulfill the requirements and the service model; the developer develops and tests the atomic services; the assembler assembles the services into an application as per the service model; and finally, the deployer deploys the application on the platform (Satoh et al. 2008). Model-driven and role-based security is usually used to mitigate security challenges (Phan 2007).

Spatial cloud computing

Computing has evolved from mainframe computing to PC computing, network computing, Internet computing, grid computing, and then cloud computing (Kim 2009; Naghavi 2012). The US National Institute of Standards and Technology (NIST) defines cloud computing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” (Mell and Grance 2010). It creates an open environment in which assets are shared (Buyya et al. 2008). Several definitions of cloud computing are listed by Vaquero et al. (2009).

Although cloud computing is a new technology, the underlying concept dates to the early 1960s, when John McCarthy suggested that computing resources might one day be sold as a utility like water or electricity (Dikaiakos et al. 2009). The term “cloud” has been in use since the 1990s, but it became popular after Eric Schmidt, then Google CEO, used it at a conference in 2006 (Sultan 2014). In 1997, Ramnath Chellappa gave the first academic definition of the term cloud computing (Madhavaiah et al. 2012). In 1999, Salesforce.com introduced for the first time the concept of providing enterprise applications through a website (Sumter 2010). Some major cloud computing products are Amazon EC2, Microsoft Windows Azure, and Google App Engine (Zhang et al. 2010). In open source cloud computing, the first major initiative was the development of Eucalyptus in 2008; other products include OpenStack, OpenNebula, etc. (Yadav 2013). In 2010, a cloud 2.0 model emerged that provides value-based services with high security and availability (Durkee 2010). Cloud computing is an evolving technology founded on concepts such as virtualization, SOA, and pay-per-use. Like SOA, it focuses on web services and provides loose coupling among system components, but unlike SOA it favors vertical services over horizontal services and infrastructure over application (Jin 2010).

In cloud computing, the facilities, i.e., software, platform, and infrastructure, are provided over the Internet. The client can request Software as a Service (SaaS), Platform as a Service (PaaS), or Infrastructure as a Service (IaaS) from the cloud. In SaaS, the software application runs in the cloud, where it can be accessed by multiple users or clients; the user logs on to the service provider's website to use the software product instead of installing it on a local computer. PaaS provides the operating system, middleware, system software, and development environment as a service, helping enterprise developers to quickly write and test customer-facing web applications. IaaS provides storage and computing facilities: servers, switches, routers, storage systems, and other devices are rented to clients so that they can deploy and run their applications (Venters and Whitley 2012). A minimal IaaS provisioning sketch is given below.
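The Python sketch below illustrates the IaaS model by renting a compute server programmatically through the boto3 library for Amazon EC2; the AMI ID and region are hypothetical placeholders, and the sketch assumes boto3 is installed and valid AWS credentials are configured.

```python
# Sketch of IaaS: provision a virtual server on demand, pay per use,
# and release it when done. AMI ID and region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request one small virtual server; billing follows the pay-per-use model.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned instance:", instance_id)

# The same API releases the rented resource when it is no longer needed.
ec2.terminate_instances(InstanceIds=[instance_id])
```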

Based on the preceding discussion, a comparison of the three delivery models is shown in Table 3.

Table 3 Comparison between delivery models of cloud computing

Figure 4 shows a cloud computing architecture integrated from Behzad et al. (2011) and Tang and Feng (2017). A client (thick, thin, or mobile) can make a request related to projection, visualization, geoprocessing, etc. A user request contained in a virtual machine requires processing power on a host component or a physical computing node in the cloud. In the cloud, there is an extra layer of virtualization, which affects the execution and hosting of cloud-based application services. The cloud service provider supplies the requested resources to the client using various cloud functionalities such as load balancing, CPU allocation, memory allocation, storage allocation, and virtual machine provisioning. In the high-performance computing layer, GPU clusters provide the capability to visualize and analyze spatial data. Virtualization helps in resource sharing, thereby removing the limitations of the physical machine and achieving greater flexibility.

Fig. 4

Spatial cloud computing architecture (adapted, with modification, from Behzad et al. (2011) and Tang and Feng (2017))

In the cloud, a virtual pool of resources is provided to users through the Internet. These resources are deployed, allocated or reallocated, and monitored for usage dynamically. Cloud computing moves data, services, and computation to a third party in a location independent way. Data uploaded to the cloud can be accessed easily and ubiquitously by the user over the Internet. The cloud provides greater flexibility and availability of resources at a lower cost: users can consume computing power just as they use water, electricity, gas, or telephone services and pay according to usage. Baars et al. (2014) discussed some of the chargeback models in cloud computing.

Behzad et al. (2011) used cloud computing in the simulation of groundwater dynamics so that resources could be made available efficiently on demand. Huang et al. (2013) used the Amazon EC2 cloud for forecasting dust storms and compared the results with HPC cluster and concurrent computing methods. Sun (2013) described the migration of a client-server system developed for environmental decision-making to cloud computing using Google Fusion Tables. In the literature, GIS applications built on cloud computing architecture can be found in diverse fields such as massive data processing (Cui et al. 2010), geoprocessing (Gong et al. 2010; Kim and Tsou 2013), flood monitoring (Kussul et al. 2012), marine monitoring (Fustes et al. 2014), and disaster management (Wan et al. 2014).

Comparison of architectures

The advantages and challenges of the architectures discussed above are compiled in Table 4. For any application, a suitable architecture can be selected as per the user requirements.

Table 4 Comparison between architectures

Concluding remarks

In the present paper, different webGIS architectures, namely client-server, SOA, and cloud computing, have been reviewed along with the evolution of GIS, the Internet, and webGIS. The advantages and issues associated with these architectures have also been examined. It may be observed from the study that the desired architecture for any webGIS depends on the problem at hand. If there is a single repository of resources that caters to all user needs and the problem works in a traditional way, then client-server architecture is suitable; within it, thin client architecture suits clients with minimal resources, whereas thick client architecture applies when the client is powerful and has resources that can be used for processing at the client end.

If the required functions are available as web services, then a SOA-based webGIS architecture is a good option. If software, platform, or infrastructure is not available at the user end, then cloud computing is preferred, as it reduces the cost of use and increases the harnessing of underutilized resources.

WebGIS architecture continues to change in order to adapt to evolving user needs and technological advancements. The focus nowadays is shifting towards peer-to-peer computing. With the evolution of the Internet of Things (IoT), there is a need to perform a certain level of computing at the edge of the network instead of transferring all computation to the cloud. Edge computing and fog computing are emerging areas for future research.