
7.1 Multidimensional Data Organization of Ocean Disasters

7.1.1 Thematic Structure of an Ocean Disaster Spatio-temporal Data Warehouse

There are many common factors related to ocean disasters, such as water temperature, salinity, density, tides, tidal waves, currents and meteorological elements, water quality, sea ice, typhoons, and storm surge. Ocean disaster data are typically characterized by multidimensionality. The organization of ocean disaster multidimensional data is based on the concept of a data warehouse.

In research related to the oceans (Shen et al. 2007), no single factor is isolated. In solving problems, we must not only consider the characteristics of a certain single factor or phenomenon, but also analyze the relationships and interactions among different factors.

Figure 7.1 shows the thematic structure of the ocean disaster spatio-temporal data warehouse.

Fig. 7.1

Thematic structure of the ocean disaster spatio-temporal data warehouse

A thematic generalization is proposed for data integration, which is an advanced application of the data warehouse. According to the customer's specific demands, thematic generalization joins various tables by a base factor. By sharing and integrating these tables, we can create a multidimensional data cube. The cube provides many topics aimed at different areas of ocean research and forms a flexible and efficient ocean disaster thematic structure.

7.1.2 Modeling the Dimensions of the Ocean Disaster in Spatio-temporal Data Warehouse

A dimension is an organized and classified hierarchical structure that describes a factor table in the data warehouse (Su et al. 2006a, b, c). Dimensions are the basic components of the multidimensional data cube, and each generally describes a set of similar members on which the user can base an analysis. A dimension of the cube often has many attributes, which can be organized into a hierarchical structure according to their level of detail. The hierarchical structure comprises the member sets in the dimension and the relative positions between these sets. For example, the time dimension can be divided into day, month, season, and year, where "day" is the most detailed level. Dimension hierarchies directly support drill-down and roll-up operations: drill-down means observing data along a dimension in greater detail, and roll-up means observing data along a dimension at a more aggregated level. Dimension hierarchies can be divided into generalized relationships, aggregation relationships and member relationships. A dimension's value is referred to as one of its members. If the dimension is multi-level, a dimension member is one value in the combination of the different dimensional levels. For example, the time dimension has year/month/day levels, and a dimension member combines one value from each level.
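To make roll-up and drill-down concrete, the following sketch aggregates hypothetical hourly measurements up a year/month/day/hour time hierarchy by truncating the composite key. The key layout, field values and the use of a mean aggregate are illustrative assumptions, not taken from the warehouse schema.

```python
from collections import defaultdict

# Hypothetical hourly measurements keyed by (year, month, day, hour);
# the values are illustrative temperatures, not real warehouse data.
hourly_facts = {
    (2020, 7, 1, 0): 21.5,
    (2020, 7, 1, 12): 23.0,
    (2020, 7, 2, 6): 22.0,
}

def roll_up(facts, level):
    """Aggregate (mean) measurements up the time hierarchy.

    level: number of leading key components to keep
           (1 = year, 2 = month, 3 = day).
    """
    groups = defaultdict(list)
    for key, value in facts.items():
        groups[key[:level]].append(value)
    return {k: sum(v) / len(v) for k, v in groups.items()}

daily = roll_up(hourly_facts, 3)    # roll up hours to days
monthly = roll_up(hourly_facts, 2)  # roll up further to months
```

Drilling down is the inverse navigation: moving from the `monthly` view back toward the finer-grained `daily` or hourly records.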

The ocean disaster spatio-temporal data warehouse adopts a metadata-driven method of sharing dimensions to create the database. A shared dimension is a dimension created at the scope of the data warehouse, and it can be used by any cube in the warehouse. Creating shared dimensions and reusing them in more than one cube saves considerable time. Standardizing shared dimensions, such as the time and geographical space dimensions, ensures that different data sets are used in a consistent manner; this integration of shared data is very important. Therefore, in the ocean disaster spatio-temporal data warehouse, dimension tables are modeled so that they can be shared by fact tables to the greatest extent possible. The ocean disaster time and spatial dimensions have an obvious hierarchy, but there are also dimensions that have no obvious hierarchy or for which the hierarchy operations (roll-up and drill-down) do not make sense (Thomas 2003). These dimensions instead describe the theme data, such as data type, data structure, and spatial reference information. This section focuses on the logical structure of the shared dimensions of ocean disaster data in the spatio-temporal data warehouse.

7.1.2.1 Time Dimension

The time data related to ocean disasters include observation time, analysis time, statistics time and forecast time, recorded at many levels of precision. For example, many statistics record data over a period of 10 days, a month or a year, whereas analysis and observation data usually record time in hours. Ocean disaster data therefore use hour, day, 10-day period, month, season and year as the time hierarchy, which ensures the integrity of the data granularity levels. The time dimension table structure is shown in Fig. 7.2.
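The levels of this hierarchy can all be derived from a single timestamp. The sketch below builds one illustrative time-dimension row; the field names, the month-based season definition and the three-way 10-day split are assumptions for demonstration, not the actual table schema of Fig. 7.2.

```python
from datetime import datetime

def time_dimension_row(ts: datetime) -> dict:
    """Derive the hierarchy levels of one time-dimension row.

    Assumed conventions: the third "10-day period" absorbs days 21-31,
    and a season is a calendar quarter.
    """
    ten_day = min((ts.day - 1) // 10 + 1, 3)   # 1, 2 or 3 within a month
    season = (ts.month - 1) // 3 + 1           # calendar quarter as season
    return {
        "hour": ts.hour,
        "day": ts.day,
        "ten_day_period": ten_day,
        "month": ts.month,
        "season": season,
        "year": ts.year,
    }
```

A loader could call this once per distinct timestamp to populate the interrelated hour, day, 10-day, month, season and year tables consistently.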

Fig. 7.2

The time dimension table structure of the ocean disaster spatio-temporal data warehouse

In most multidimensional models, every fact must be related to the lowest-level dimension value of each dimension. In the ocean disaster spatio-temporal data warehouse, regardless of granularity, each fact table must hold measurements at a single, consistent level. If facts of multiple granularities are mixed or stored together, users become confused and errors in data analysis and applications are easy to make. Thus, the time dimension table contains interrelated chronology tables, such as season, month, 10-day, day and hour tables. Each time level can be aggregated into any higher level (connecting lines are omitted above for clarity). Each time data at a new level of time granularity are introduced, the fact table produces a corresponding granularity level. The meanings of the fact table fields are shown in Table 7.1. Each record in the fact table also corresponds to the time, life cycle and user-customized time (added as needed), which are collectively known as time control.

Table 7.1 Field descriptions of the time dimension table in the ocean disaster spatio-temporal data warehouse

7.1.2.2 Space Dimension

From the perspective of spatial knowledge in the ocean field, the sea is a continuous 4D field. In the computer, however, continuous space is discretized into latitude, longitude and depth points (X, Y, Z), and values are then recorded at these points, as in station, line and navigation data, remote sensing information products, and 3D data. In the ocean disaster data warehouse, spatio-temporal data with few points, such as station, line and underway navigation data (point and line class data), are still recorded as scattered latitude and longitude values. Three-dimensional multi-layer data, as well as analyzed remote sensing information products, are recorded using the names, coordinates, spatial scope and depth of the bedding face products. The content of the spatial dimension table is derived from the spatial metadata information and browse diagrams. The spatial dimension table structure is shown in Fig. 7.3.

Fig. 7.3

The structure of a space dimension table in the ocean disaster spatio-temporal data warehouse

In the time and space data of the ocean disaster data warehouse, the hierarchy of the spatial area dimension table was identified through the data grid resolution. Regarding the spatial dimension, the roll-up and drill-down operations include the aggregation and further subdivision of the spatial data grid resolution.
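Aggregating spatial grid resolution, as described above, can be sketched as block-averaging a grid to a coarser resolution (roll-up); drill-down would navigate back to the finer grid. The grid contents and the use of a mean are illustrative assumptions, and the sketch assumes the grid dimensions are divisible by the coarsening factor.

```python
def coarsen(grid, factor):
    """Roll up a 2-D grid by averaging factor x factor blocks.

    A sketch of aggregating spatial data grid resolution; assumes
    len(grid) and len(grid[0]) are divisible by factor.
    """
    rows, cols = len(grid), len(grid[0])
    out = []
    for i in range(0, rows, factor):
        row = []
        for j in range(0, cols, factor):
            block = [grid[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out
```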

7.1.2.3 Dimensions of the Source Project, Contact Information, and Investigation Information

The data source and project contact information are important; they contain the project entities from the metadata source and the data source. The investigation information dimension records the data observation instruments and platforms, and its dimension table is derived from the metadata. The unit table records all information related to the data units. The source project dimension table, the contact information dimension table and the survey information dimension table are all shared dimension tables.

7.1.2.4 Category, Data Format, and the Spatial Reference Dimensions

The element category dimension records the classification information of the data. The element category is the primary key of the dimension table and also identifies the level of the data category in the dimension hierarchy. Data in the category dimension can be rolled up or drilled down according to the level of the category code to obtain the classification information at the parent or child level. The element category dimension is derived from the data classification metadata entity. Because of differing data sources and access methods, data on the same topic in the fact table may come in a variety of data formats. The data structure dimension records the format of the data and its usage information, making it easy for a program to identify and handle this information automatically. The data structure dimension is derived from the data format metadata entity. The spatial reference dimension records the spatial reference information of the data and is derived from the reference entities in the spatial data metadata. The data type, data structure and spatial reference dimensions are shown in Fig. 7.4.
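Roll-up and drill-down on the category code can be sketched with hierarchical codes in which each level appends a fixed number of digits. The two-digit-per-level convention and the category names below are illustrative assumptions, not the actual classification scheme.

```python
# Hypothetical hierarchical category codes: each level adds two digits.
categories = {
    "01": "Physical oceanography",
    "0101": "Temperature",
    "0102": "Salinity",
    "02": "Chemistry",
    "0201": "Dissolved oxygen",
}

def drill_down(code):
    """Return the immediate sub-categories of a category code."""
    return {c: n for c, n in categories.items()
            if c.startswith(code) and len(c) == len(code) + 2}

def roll_up(code):
    """Return the parent category code, or None at the top level."""
    return code[:-2] or None
```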

Fig. 7.4

The structures of the element category, data format and spatial reference tables in the ocean disaster spatio-temporal data warehouse

7.1.2.5 Warehouse Model of the Ocean Disaster Multi-dimensional Spatio-temporal Data

The ultimate goal of the data warehouse model design is to form the multidimensional data structure joining fact tables with dimension tables and to provide multidimensional data for OLAP analysis. Considering the complexity of the spatio-temporal data, we adopt star and snowflake structures to organize the ocean disaster spatio-temporal data warehouse, and we adopt the star schema to organize the multidimensional data for comprehensive analysis. After defining the data standards (such as data types, constraint conditions and indexes) and the primary keys, foreign keys, properties and so on, we translate the model structure into the physical structure of the data warehouse. In the physical implementation, when the hierarchies of a dimension are consistent across different fact tables, the dimension table pointer can be made to point to the shared dimension.

7.1.2.6 The Measurement of the Fact Table

A fact is the intersection of the dimensions; it records the measurement of a particular subject. In the ocean disaster spatio-temporal data warehouse fact table, its value could be the temperature, salinity, flow velocity or flow direction. When building the model of a basic multidimensional analysis theme, all columns of the fact table except the measured variables are primary keys of the dimension tables (e.g., hour ID, area ID, point ID, contact information ID, item ID, survey information ID, data category code, data format ID and spatial reference identifier). These columns are the foreign keys of the fact table and cannot be empty. The benefit of this structure is that the fact table can join a dimension table through a foreign key of only a few bytes, saving much storage space. This strategy is particularly effective when the theme involves multiple dimensions or when one dimension has multiple levels. An analysis theme may correspond to several fact tables, such as the temperature, salinity and density seasonal statistical fact tables and monthly statistical fact tables. The tidal theme corresponds to the tidal site observation fact table and the tide bedding face forecast data table. A fact table may also belong to multiple analysis themes; for example, the temperature, salinity and density themes can be used for fishery resources analysis, red tide analysis and various other topics.
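The foreign-key join between a narrow fact row and its dimension tables can be sketched as follows. The surrogate key values, column names and the single-measure fact row are illustrative assumptions, not the warehouse's actual tables.

```python
# Minimal star-schema sketch: dimension tables keyed by surrogate keys,
# fact rows holding only foreign keys plus the measure.
time_dim = {1: {"hour": 6, "day": 1, "month": 7, "year": 2020}}
area_dim = {10: {"name": "Taiwan Strait"}}

fact_temperature = [
    {"hour_id": 1, "area_id": 10, "temperature": 22.3},
]

def resolve(fact_row):
    """Join one fact row with its dimension tables via foreign keys."""
    return {
        **time_dim[fact_row["hour_id"]],
        **area_dim[fact_row["area_id"]],
        "temperature": fact_row["temperature"],
    }
```

Storing only the small integer keys in every fact row, rather than the full dimension attributes, is what saves the storage space mentioned above.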

7.1.2.7 Multidimensional Basic Analysis Theme Data Model

By joining the fact table and the dimension tables, we obtain the multidimensional data structure of each analysis theme. Taking the temperature theme as an example, we examine its multidimensional model as follows. The fact table of the temperature theme includes tables of temperatures at different time levels for specific areas. Table 7.2 shows that the measurement of the temperature theme is the temperature value, stored in the "temperature" field. In the area temperature table, the temperature field is of the Oracle BFILE type; a record in this field stores the graphics file or the connection information of the data node used to release the data (such as a WMS service address and parameter information), and the data format is obtained through the data format tables. In the observation temperature table, the temperature is recorded directly.

Table 7.2 Measurement of the ocean disaster theme in the spatio-temporal data warehouse

7.1.2.8 Modeling for Comprehensive Analysis

The comprehensive analysis theme uses multiple basic analysis data sets as its data source, according to user needs. In the ocean disaster spatio-temporal data warehouse, the comprehensive analysis theme adopts a star schema to construct its multidimensional data model. The comprehensive analysis theme table is formed by joining the various basic analysis theme tables, and a multidimensional data cube is formed by sharing the dimension tables. The comprehensive analysis of a typhoon storm surge and of a red tide environment are used as examples for studying the multidimensional data model of the comprehensive analysis theme in the ocean disaster spatio-temporal data warehouse.

The multidimensional structure of the typhoon storm surge comprehensive analysis theme is shown in Fig. 7.5. The foreign key "typhoon number" in the storm surge fact table joins the primary key of the typhoon table. The typhoon storm surge fact table has a time granularity of hours, and the two data sets share a schedule in the time dimension.

Fig. 7.5

The structure of storm surge table

Red tide is a type of ocean disaster affecting many coastal countries, and it is a complicated ecological anomaly with very complex causes. Water eutrophication is the material basis and primary condition of red tides. Water quality monitoring associated with red tides shows that the sea water has been badly polluted and subjected to eutrophication, with nitrogen and phosphorus nutrients greatly exceeding regulatory limits. Some organic substances also prompt the rapid proliferation of red tide organisms. Changes in the hydrometeorological and physical-chemical factors of the sea water are further important causes of red tides. Water temperature is a key environmental factor: the optimum temperature for red tides is 20–30 °C, and scientists have found that a sudden increase in water temperature of more than 2 °C over the course of a week is a precursor of red tide occurrence. Chemical factors such as salinity, together with biological factors, also contribute to the proliferation of red tide organisms. Salinity in the range of 26–37 is necessary for a red tide to occur, and when salinity is in the range of 15–21.6, a thermocline and a halocline form easily; their existence provides conditions for the gathering of red tide organisms. Accordingly, monitoring data show that when red tides occur, the area typically lacks rainfall, the weather is hot, the water temperature is high, the wind is weak, and the conditions of the water environment change slowly.
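The environmental thresholds above (a temperature rise of more than 2 °C within a week, salinity in the favourable 26–37 range) can be expressed as a schematic rule check. This is only a sketch of how such thresholds might be screened against monitoring data, not an operational red tide forecast model; using the weekly max-minus-min as the "sudden increase" is an assumption.

```python
def red_tide_precursor(weekly_temps, salinity):
    """Flag the red-tide precursor conditions described in the text.

    weekly_temps: water temperatures (deg C) sampled over one week.
    salinity: a representative salinity value for the same period.
    """
    temp_jump = max(weekly_temps) - min(weekly_temps) > 2.0
    salinity_ok = 26 <= salinity <= 37
    return temp_jump and salinity_ok
```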

When analyzing the environment in which red tides occur, the time level of the temperature, salinity, tide and tidal current regional data is "hour". Combined with biological and chemical data (the biological and chemical analysis types are filtered through the element category table), the multidimensional data model of the red tide environment comprehensive analysis theme is formed. The procedure is shown in Fig. 7.6.

Fig. 7.6

The multidimensional data cube of the “red tide environment comprehensive analysis theme”

In the multidimensional analysis, the analysis may involve the temperature, salinity, tide, tidal current, biological and chemical fact and dimension tables, together with the spatial region dimension tables.

7.2 Three-dimensional Ocean Disaster Monitoring Data Management

7.2.1 Software Structure Design

The three-dimensional ocean monitoring data management system adopts database technology, web service technology and visualization technology (Wright et al. 2007, 2010). The system can store, query and visualize monitoring data. It mainly includes four modules: device management, data management, data query and task management.

  1.

    The function of the device management module is mainly to manage the ground-wave radar, shore-based devices, buoys and subsurface buoys. It includes adding devices, modifying devices, visualizing the monitoring devices' data and monitoring their operating conditions.

  2.

    The data management module provides the function of managing monitoring data, supporting both manual entry and file import. With manual input, all types of monitoring data are entered by hand. With file import, the data are loaded from files, including wind speed, temperature, flow velocity, flow direction, wave height, dissolved oxygen and pH.

  3.

    The function of the data query module is to query monitoring data according to specified conditions and to visualize the results in many ways. This module supports simple queries and joint queries and can display the query results as charts or graphs.

  4.

    The function of the task management module is to manage and monitor data quality through workflow technology. In particular, the module can define tasks, create workflows, execute tasks, check tasks, etc. (Fig. 7.7).

    Fig. 7.7

    The structure of the management software for three-dimensional ocean monitoring data

To improve the visualization quality of the ocean monitoring data, this system uses Skyline as the basic development platform. The data management system uses a C/S structure based on Microsoft Visual Studio and Skyline; the Skyline extensions and the 3D display plugin must be installed on the client. The purpose of 3D ocean monitoring data management is to manage and query the monitoring data in a scientific and effective way. The system aims at effectively managing and querying the monitoring data of shore-based devices, ground-wave radar, buoys and subsurface buoys, and it will play a role in ocean information management and application.

7.2.2 Software Development

The system development mainly targets the network environment, and the data access interface is provided by web services, as shown in Fig. 7.8.

Fig. 7.8

The development of the data acquisition interface based on web service

7.3 Multi-dimensional Expression of the Process of Ocean Disaster Factors

7.3.1 Software Structure Design

From the perspective of technical implementation, the software of the multi-dimensional expression of ocean disaster factors from the bottom-up can be divided into the data access layer, the business logic layer and the presentation layer.

  1.

    Data access layer

The data access layer is used to access the data warehouse and to perform the SELECT, INSERT and UPDATE operations on the data tables, especially SELECT. All of these operations obtain data for use by the business logic layer. The core of this layer is the data access class GetData. According to the data type, the class's public interfaces include IGetPoint (obtains the value of a single point, a time series or a depth sequence of values); IGetCurve (obtains the point set of a line or the time series trajectory of a line); IGetSurface (obtains a plane or a time series of planes); and IGetVolume (obtains the three-dimensional data for a certain hour or a time series of three-dimensional data).
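The shape of such a data access interface can be sketched as an abstract class with concrete implementations behind it. The method names, signatures and the trivial `ConstantField` implementation below are illustrative assumptions, not the actual GetData/IGetPoint API.

```python
from abc import ABC, abstractmethod

class IGetPoint(ABC):
    """Sketch of a point-access interface like the one described above."""

    @abstractmethod
    def get_value(self, lon, lat, depth, time):
        """Return the value of a single point."""

    @abstractmethod
    def get_time_series(self, lon, lat, depth, times):
        """Return the sequence of values at a point over the given times."""

class ConstantField(IGetPoint):
    """Trivial implementation returning a fixed value, for testing."""

    def __init__(self, value):
        self.value = value

    def get_value(self, lon, lat, depth, time):
        return self.value

    def get_time_series(self, lon, lat, depth, times):
        return [self.value for _ in times]
```

Programming the business logic layer against the abstract interface rather than a concrete class is what keeps the inter-layer dependence weak.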

  2.

    Business logic layer

The business logic layer is used to realize the visualization of the ocean environmental data. This layer, which sits between the data access layer and the presentation layer, plays an essential role in the process of data exchange. The software is designed with an interface-oriented method in the hierarchical design process, so the dependence between the upper and lower layers is weak: each layer depends only on the interfaces of the adjacent layer, and changing one layer's internal design has no effect on the other layers.

  3.

    Presentation layer

The presentation layer is used to display data, receive input and provide users with an interactive operation interface. The user is the core of all processing, and the workflow should not be dictated by the application; thus, the user interface should let the user control the application and should respond conveniently. The design should follow graphical user interface (GUI) design principles, which means an intuitive and consistent interface that is transparent to users.

This study adopts Skyline as the basic research and development platform, integrated with visualization development based on DirectX. The study realizes many expression methods, such as 2D graphs of single-point, single-element ocean disaster data changing over time, 2D graphs of single-point, multi-layer elements varying with depth, graphics of a single element field changing over time, and simulation of multi-layer dynamic processes.

DirectX (DX), developed by Microsoft, is a multimedia programming interface. Implemented in C++ and following the COM standard, DirectX lets games and multimedia applications run more efficiently on the Windows platform. It strengthens 3D graphics and sound effects, and it provides designers with a common standard for hardware drivers, so developers do not have to write a different driver for each brand of hardware. This strategy also reduces the work required for users to install and configure hardware.

DirectX is composed of many APIs, which can be divided into four categories: display, sound, input and network. The display part is the key to graphics processing and can be divided into DirectDraw (DDraw) and Direct3D (D3D): the former is mainly responsible for 2D graphics acceleration, and the latter for displaying 3D effects and providing an interface to the built-in 3D acceleration of most newer video adapters. Direct3D provides powerful and efficient communication between software programs and the hardware accelerator, and it contains special CPU instruction support for newer computers. The core API of the sound part is DirectSound, which is responsible for playing sound, audio mixing, 3D sound enhancement and recording; sound card compatibility problems are solved by DirectSound. The input API, DirectInput, supports a variety of input devices and makes them work at optimal levels; these include not only the keyboard and mouse but also gamepads, joysticks, simulators and so on. The network API, DirectPlay, is mainly responsible for network development. It offers a variety of connections, such as TCP/IP, IPX, modem and serial port, and it also provides network session functions and security measures.

The DirectX Utility Library (DXUT) is a program framework built on the Direct3D 9 and Direct3D 10 APIs. It is used to create a series of powerful and convenient Direct3D development samples, prototypes and tools. DXUT works on both Direct3D 9 and Direct3D 10; a program built on DXUT can easily use these APIs, and DXUT simplifies typical Windows and Direct3D API usage.

The system develops SeaGeo, a 3D visualization engine for ocean environmental data based on DXUT. Developed within the MFC ActiveX framework, SeaGeo supports Direct3D 9.0 and its packaging component DXUT; the spatial coordinate transformation uses the DXUT camera components. From top to bottom, the engine is divided into the control interface, 3D realization and data management, which are implemented by the classes CSeaGeoCtrl, CSeaGeo3D and CDataFile, respectively.

Field data are stored in the BFILE format in the data warehouse. According to the user's query, the visualization software's data access layer retrieves the data from the data warehouse and forms the 3D data field. Class CDataFile reads this data field into memory, assigns values to the 3D structure arrays (struct DATAGEOS for 3D scalar fields and DATAGEOV for 3D vector fields), and builds the relationship between each grid point's attribute value and its color (or the direction of its arrow and its color). Class CSeaGeo3D uses these structures to establish a MESH on each section and start rendering it. The functions of Class CDataFile are shown in Table 7.3.
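The attribute-to-color relationship that CDataFile builds for each grid point can be sketched as a simple normalized color ramp. The blue-to-red palette and the clamping behavior are illustrative assumptions, not the engine's actual color mapping.

```python
def value_to_color(value, vmin, vmax):
    """Map a scalar grid value to an RGB color on a blue-to-red ramp.

    vmin maps to pure blue, vmax to pure red; out-of-range values
    are clamped to the ends of the ramp.
    """
    t = (value - vmin) / (vmax - vmin)
    t = max(0.0, min(1.0, t))          # clamp out-of-range values
    r = int(255 * t)
    b = int(255 * (1 - t))
    return (r, 0, b)
```

In the engine, the color computed per grid point would be assigned to the color attribute of the corresponding mesh vertex before rendering.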

Table 7.3 The main functions of Class CDataFile

Class CSeaGeo3D is mainly used for 3D rendering. Its main features include 3D scene initialization, building the profile MESH, rendering the MESH, and picking attribute values of nodes in the scene. Creating a MESH means assigning the color corresponding to each grid value, calculated in Class CDataFile, to the color attribute of the vertex object. The functions of CSeaGeo3D are shown in Table 7.4.

Table 7.4 The main functions of Class CSeaGeo3D

7.3.2 Software Development

7.3.2.1 Single-Point and Single-Layer Element Changes Over Time

The main function of this module is to display, on the earth sphere model, how the data at a coordinate in the element field change over time. We express this change through a curve of the single-point element of the two-dimensional field data over time. The point-process visualization method targets spatial point objects: the time series data of each station can be converted into a point process for that station. The spatial locations of these point data are essentially constant; only their attribute information changes with time. For example, the seasonal changes of water temperature can be expressed as a point process. The change of an ocean element over time can be expressed as a continuous curve in 2D geometric coordinates, with the horizontal axis as time and the vertical axis as the property value. The interface is shown below (Fig. 7.9, Tables 7.5 and 7.6).

  1.

    Input and output items

Fig. 7.9

The function interface of single-point element changes over time

Table 7.5 List of the input items of the “single-point elements changing over time” module
Table 7.6 List of the output items of the “single-point elements changing over time” module

When accessing the sub-module, users choose the element type in the element type drop-down list box (which includes four options: temperature, salinity, density and sound velocity). The operation type "single point changing over time" is selected in the operation type drop-down list box, forecast data is selected in the data type drop-down list box, and the time and depth layer are selected in the corresponding drop-down list boxes; then OK is clicked. Users set a start time to show a week's forecast result, with an interval of 24 h.

After clicking the OK button, the user can see the data area on the sphere model. The cursor turns into an arrow. After the user clicks on any position on the data area, a new page will show the data curve of the position. After clicking the cancel button, the temporarily-created position label and the regional box are removed and returned to the initial state.

7.3.2.2 Single-Point and Single-Layer Element Changes Over Sea Depth

The main function of this module is to display, on the earth sphere model, how the data at a coordinate in the element field change with sea depth. We express this change through a curve of the single-point element of the two-dimensional field data over sea depth (Tables 7.7 and 7.8).

  1.

    Input item

  2.

    Output items

Table 7.7 List of the input items of the “single-point elements changing over depth” module
Table 7.8 List of the output items of the “single-point elements changing over depth” module

When accessing the sub-module, users choose the element type in the element type drop-down list box (which includes four options: temperature, salinity, density and sound velocity). The operation type "single point changing over depth" is selected in the operation type drop-down list box, forecast data is selected in the data type drop-down list box, and the time is selected in the time drop-down list box; then OK is clicked.

After clicking the OK button, the user can see the data area on the sphere model. The cursor turns into an arrow. After the user clicks on any position of the data area, a new page will show the data curve of the position. After clicking the cancel button, the temporarily-created position label and regional box are removed and returned to the initial state.

7.3.2.3 Single Element Field Dynamic Changing Over Time

The visualization of the element field is very intuitive: the user can easily follow the changing process of the physical values through the visualization of the element field process. The element field object can be a flat layer or a profile at an arbitrary angle.

The visualization of a single-element field process changing over time can be realized in two ways: animation and direct rendering. The animation method generates animation files with the program or with animation tools (such as Movie Maker): within a time sequence, images for the different discrete times are generated through an image generation interface, and these images are used as consecutive frames of an animation. The two methods are similar in that both are based on the idea of snapshots and convert an element field into a sequence of snapshots. However, the direct rendering method does not generate image files; it renders the data directly after the user sends a request and renders the data of the next moment after a set time interval. Compared with animation, direct rendering is a real-time method (it does not need to generate frame-by-frame images and animation files), so it is easier for human-computer interaction: users can pause at any time during rendering and use the mouse to pick points for queries.
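The snapshot idea shared by both methods can be sketched as sampling the field at each discrete time. The `field_at` callable and the toy field below are illustrative assumptions; in the real system each snapshot would be a rendered frame rather than raw values.

```python
def snapshots(field_at, times):
    """Build the snapshot sequence underlying both playback methods.

    field_at: any callable returning the field for a given time.
    Returns one snapshot per discrete time, in order.
    """
    return [field_at(t) for t in times]

# With direct rendering, each snapshot would be drawn as soon as it is
# produced; with the animation method, all frames are written to a file
# first and played back afterwards.
frames = snapshots(lambda t: [[t, t + 1]], range(3))
```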

The main function of this module is to display the tidal water level field changing over time on the spherical earth model. The data are derived from the Taiwan Strait. To show the tidal bedding face data, different colors represent different water levels. This module also provides the functions of querying a single point of data and numerically mapping regional data. The visualization of surface processes is very intuitive: we can easily recreate the process of changing physical values through the visualization of the surface process and find the rules of this change. The interface is shown below (Tables 7.9 and 7.10).

  1. Input items

  2. Output items

Table 7.9 List of input items of the “single-factor data changes over time” submodule
Table 7.10 List of output items of the “single-factor data changes over time” submodule

After accessing the submodule, the tidal bedding face data are selected in the data type drop-down list box. Operations on tidal bedding face data include dynamic change, information query and numerical mapping. The time is selected in the time drop-down list box. The time resolution of the tide forecast data is 1 h, i.e., 24 frames of data per day; by default, the 24 frames covering one day are chosen for dynamic display. When the "data load" button is clicked, the 24 data frames are loaded.

After data loading is completed, if the operation type is "dynamic", the start button is clicked. The dynamic change process of the tidal bedding face data is then shown in the scene of the three-dimensional sphere. After clicking the cancel button, the location label and the temporarily created regional box are removed, and the 3D sphere window returns to its initial state (Figs. 7.10 and 7.11).

Fig. 7.10
figure 10

The visualization of sea surface temperature field

Fig. 7.11
figure 11

The visualization of sea surface wind field

After data loading is completed, if the operation type is “information query of a single point”, the cursor will turn into an arrow. When the user clicks the data area of the sphere model, the water level of the location that the user clicked on will display on the left side of the page. After clicking the cancel button, the location label and the temporarily created regional box are removed, and the 3D sphere window returns to its initial state.

After data loading is completed, if the operation type is “regional information query”, the cursor will turn into an arrow. When the user clicks the data area of the sphere model twice, a data rectangle is generated. After the query of the data files, the result is shown in the form of numerical mapping on the sphere model. After clicking the cancel button, the location label and the temporarily created regional box are removed, and the 3D sphere window returns to its initial state.
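Both the single-point query and the regional query reduce to mapping clicked coordinates onto the regular data grid. The following is a sketch under assumed conventions (a grid with a known lower-left origin and uniform resolution; the function names are illustrative):

```python
def cell_index(lon, lat, origin_lon, origin_lat, res):
    """Map a clicked (lon, lat) to (row, col) in a regular grid whose
    lower-left corner is (origin_lon, origin_lat) with cell size res."""
    return int((lat - origin_lat) / res), int((lon - origin_lon) / res)

def region_query(grid, p1, p2, origin, res):
    """Return the sub-grid spanned by two clicked corner points,
    as used for the regional numerical mapping query."""
    r1, c1 = cell_index(p1[0], p1[1], origin[0], origin[1], res)
    r2, c2 = cell_index(p2[0], p2[1], origin[0], origin[1], res)
    return [[grid[r][c] for c in range(min(c1, c2), max(c1, c2) + 1)]
            for r in range(min(r1, r2), max(r1, r2) + 1)]
```

A single-point query is `grid[r][c]` with `(r, c)` from `cell_index`; the regional query takes the rectangle between the two clicked corners regardless of click order.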

7.3.2.4 Dynamic Changes in the Multilayer Elements Field Over Time

The main function of this module is to visualize and display the ocean environmental factors in 3D. The feature types include sea temperature, salinity, density, sound velocity, and current. Through the 3D visualization plugin, we can query and display the elements of a single point of information. This system adopts the method of profile reconstruction of a static 3D visualization data field. Through the 3D visualization plugin, the 3D profile data structure is constructed and the data is then rendered frame by frame (Tables 7.11 and 7.12).

  1. Input items

  2. Output items

Table 7.11 List of input items of the “multilayer elements field dynamic process simulation” module
Table 7.12 List of output items of the “multilayer elements field dynamic process simulation” module

After accessing the submodule, the operation page is loaded and, at the same time, the main window is redirected to the 3D display page.

The operation type is selected as “3D display” in the user operation type drop-down list box. The data type is selected as “forecast data” in the data type drop-down list box. The time is selected in the time drop-down list box. The OK button is clicked and the 3D data is loaded.

The 3D data of the element selected by the user are shown in the 3D window. The left side of the function page shows the legend and operations, including the "mark", "land template", and "boundary" check boxes, which correspond to the contents shown in the window. The check box options "depth", "longitude", "latitude" and "arbitrary" are used to control the profile angle.

After data loading is completed, the data are displayed in the 3D window. When the space key is pressed, the 3D data grid is displayed. When the profile type "depth" is selected, the "play" button is activated; clicking it displays the changing process of the 3D field data from the ocean surface down to the underlying water.

In the 3D window, the user can control the point of view and the section position through the keyboard. The "depth", "longitude" or "latitude" checkbox is checked in the 3D control panel, and the "+" and "−" keys (on notebooks, use Ctrl + NumLk to enable the keypad) are used to step through layers, longitudes or latitudes. When the "arbitrary" checkbox is checked, the t, g, f, and h keys control the first point of latitude and longitude and the i, j, k, and l keys control the second point. When the "depth" checkbox is checked, clicking the "play" button makes the 3D window automatically display the changing process of a single element along with the layer depth. When the "mark", "land", and "boundary" checkboxes are checked, the corresponding templates are loaded into the 3D window.

When the element type is salinity, density, sound velocity or flow field, the OK button is clicked and the corresponding data are loaded into the 3D window. The other operations are the same as for the temperature field described above. When the element type is "flow", "show vector arrow" can be checked in the 3D control panel, and arrows then indicate the flow field vectors in the 3D window. After clicking the cancel button, the location label and the temporarily created regional box are removed, and the 3D sphere window returns to its initial state.

The dynamic process simulation of a multilayer element field can be realized in two ways. The first is to display the data layer by layer from the surface down to the underlying water, showing the numerical change in the vertical direction. The second is to visualize the dynamic process of the multilayer element field through a dynamically moving vertical section. With the aid of the 3D visualization engine, the latter first constructs a 3D profile data structure and then renders the data frame by frame; by controlling the interval time, this method can express a volume process. The profile process of a 3D salinity field is shown in Figs. 7.12 and 7.13.
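Assuming the 3D field is stored as a (depth, latitude, longitude) array, the two modes correspond to a layer-by-layer sweep and a fixed-latitude section. This is a sketch; the array layout and function names are assumptions, not the system's actual data structure:

```python
def depth_sweep(field3d):
    """First mode: yield one horizontal layer at a time, from the
    ocean surface downwards, for frame-by-frame rendering."""
    for k, layer in enumerate(field3d):
        yield k, layer

def latitude_section(field3d, lat_index):
    """Second mode: vertical section at one latitude row;
    result[k][j] is the value at depth level k, longitude column j."""
    return [layer[lat_index] for layer in field3d]
```

Rendering the frames produced by either function at a fixed time interval gives the dynamic volume-process effect described above.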

Fig. 7.12
figure 12

The 23.5°N profile of the 3D salinity field visualization process

Fig. 7.13
figure 13

The 130°E profile of the 3D salinity field visualization process

To improve the speed of real-time rendering, the system adopts a ground-state spatio-temporal data model for rendering the data. In this model, a ground state is a completely rendered data frame for one moment (all grid points are rendered). The ground-state distance is the number of frames between two adjacent ground-state data frames. Sea flow field data differ from other spatio-temporal data in that the land (the continental shelf, represented by the continental shelf template) is constant, so no objects are added or deleted. The change domain is the range within which the values of a domain attribute are considered unchanged between two adjacent data sets (vector data have two attributes, magnitude and direction; a change in either attribute is considered a change in object state).

The ground-state distance is mainly influenced by the product data interval, the area size, and the grid density; the combination of the single-moment data volume and the time-sequence length determines the distance. Increasing the density of the data grid causes the total amount of data to grow geometrically. The ocean changes constantly, and the change domain is determined mainly by the data reflecting the sea conditions at each moment: when the selected parameters change distinctly, the domain can be relatively large. When rendering starts, the system determines the ground states and dynamically selects an appropriate change domain according to the length of the selected time slot and the sea area. The grid point data values are then compared: a grid cell is skipped when its new value falls within the change domain, and it is rendered only when its value is outside the domain. This strategy saves rendering time to some extent.
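The per-cell skip-or-render decision can be sketched as follows. This is a minimal illustration of the ground-state strategy; the names and the absolute-difference thresholding are assumptions:

```python
def cells_to_render(prev_frame, frame, change_domain, is_ground_state):
    """A ground-state frame repaints every cell; any other frame
    repaints only the cells whose value moved outside the change
    domain relative to the previous frame."""
    coords = [(r, c) for r in range(len(frame)) for c in range(len(frame[0]))]
    if is_ground_state or prev_frame is None:
        return coords                      # full rendering at a ground state
    return [(r, c) for r, c in coords
            if abs(frame[r][c] - prev_frame[r][c]) > change_domain]
```

Widening the change domain trades visual fidelity for rendering speed, which matches the behavior described above for calm versus rapidly changing sea conditions.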

7.4 The Prototype System of Analysis of the Storm Surge Disaster Process

7.4.1 Data Structure

The numerical forecast and data processing of storm surge are shown in Fig. 7.14. First, according to the typhoon track data from Japan, South Korea, the United States and the national weather forecast, as well as years of experience, a relatively accurate typhoon path is forecast. Then, parameters are input into the system in the standard format, numerical calculations are conducted, and the storm surge forecast data are generated, including site forecast data and field data. Finally, the process in time and space is expressed and released online.

Fig. 7.14
figure 14

Flow chart of the storm surge forecast and data processing

Existing numerical calculation models generally segment the research object to simplify the boundary problem in the model calculation. The segmentation in the storm surge forecast model is manifested in two aspects: the grid segmentation of the study space, and the discretization of the storm surge process in time. The output information of the model is thus a collection of information that is discrete in both time and space.

The output data of the storm surge numerical forecast have the following characteristics: (1) For a given calculation model, the study area is usually fixed, especially for numerical models aimed at local characteristics, whose parameters often include regional empirical parameters. Thus, the model output often records only the element values; the relative positions are represented by the order of the data, and the areal coverage is reflected in the model parameters. (2) Considering the structure of the initial input data and the model calculation, a regular grid is used to divide the whole study area into a series of smallest uniform cells, so the output values are the values of the corresponding elements of each cell. (3) For the predictive calculation, the time step of this ocean disaster forecasting model is constant. (4) The forecast data are divided into two parts: site forecast data and field data.

Therefore, the output data of the ocean storm surge disaster forecast model are a group of sequences repeated at a particular time interval and arranged in an orderly way. Each of these numerical sequences corresponds to a series of small regular grid cells. From a GIS point of view, the forecast data at a single time are very similar in form to raster image data; the differences lie only in the recording method and in the meaning of the data.
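Because the model records values only, a reader of the output must reconstruct each value's position from its sequence order together with the grid parameters. A sketch of that reconstruction (row-major ordering and the parameter names are assumptions):

```python
def attach_positions(values, n_cols, origin_lon, origin_lat, res):
    """Recover (lon, lat, value) triples from a value-only, row-major
    model output, using the grid origin and resolution carried in the
    model parameters."""
    triples = []
    for i, v in enumerate(values):
        row, col = divmod(i, n_cols)       # position implied by order alone
        triples.append((origin_lon + col * res, origin_lat + row * res, v))
    return triples
```

This is the sense in which the output resembles raster data: the geometry lives entirely in a few header parameters, not in the records themselves.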

Based on the analysis of the characteristics of storm surge forecast data, the system needs to solve the standardization of storm surge forecast data in a distributed network environment and the associated network transport problems, so that the forecast data can be displayed and released on a unified platform. A typical application is the "digital ocean" project in China. In this project, to avoid duplicated construction, each provincial-level node is mainly responsible for the production and release of forecasting data, while the National Center for Ocean Information is responsible for the operation and maintenance of the whole system, achieving the storage, distribution, and release of the data. Thus, to transfer the data through the Data Service Bus, it is necessary to standardize the storm surge forecast data.

Considering the requirements of multi-type data and scalability, we standardize the data with XML. XML separates the structured data from the other descriptive information and allows structured data from different sources to be merged (integrated). The client can easily extend the XML data to adapt to various application requirements. From the perspective of data description, XML is flexible and scalable, with good structure and constraints; from the perspective of data processing, its format is simple, easy to read, and easy to process by program.

The process of transmission, display and release of storm surge forecast data is as follows. First, the storm surge forecast site data and field data are produced by the node storm surge forecasting system. Second, the data are transformed into standard XML and transferred to the data server through the data bus. The server designates storage space for the data received; when the transmission is over, the data server adds new records to the log. The monitoring service reads the log records to find the data directory and starts to process these data, generating visual raster data and updating the process flag. The web publishing foreground program can then detect that the new data have been processed. When "Loading data" is selected in the interface, the processed raster data are downloaded from the web server to the client. After matching the projection and optimizing the detail, the storm surge data are loaded onto the sphere. The users can then browse the dynamic space-time process information of the storm surge, the tide field at each time and the water level at each point.
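A site-data record could be wrapped in XML along the following lines. The tag and attribute names here are purely illustrative assumptions, not the actual Data Service Bus schema:

```python
import xml.etree.ElementTree as ET

def site_forecast_to_xml(site_id, lon, lat, records):
    """Serialize one site's forecast series; `records` holds
    (time, surge, astronomical_tide, total_level) tuples."""
    root = ET.Element("StormSurgeForecast")
    site = ET.SubElement(root, "Site", id=site_id, lon=str(lon), lat=str(lat))
    for t, surge, tide, total in records:
        ET.SubElement(site, "Record", time=t, surge=str(surge),
                      tide=str(tide), total=str(total))
    return ET.tostring(root, encoding="unicode")
```

Because the structure is self-describing, a receiving node can validate and extend the envelope without coordinating a binary format with every producer, which is the scalability benefit claimed above.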

The numerical forecast of storm surge generates site data. These data contain a series of site latitude and longitude coordinates, surge heights ("added water"), astronomical tide levels and total water level forecast values. The site data are discrete and are represented by multiple time sequences. Based on this characteristic, we design a data storage structure for time-series points by combining elements of the ocean data model with a time-sequence point storage structure.

The time sequence type description table describes the specific forecast parameter value types, such as the name and units of the parameter, whether the time step is regular (for an irregular time series this attribute field is left undefined), the data type (mean, maximum, minimum, instantaneous value, etc.) and the specific source (observation record or numerical model calculation). The table can also include other descriptive attribute information, and the user can extend it in a flexible way. For example, the storm surge water level, the total water level and the astronomical tide are important for analyzing the storm surge.

The time series storage table mainly stores the specific parameter values and times (Yasuko et al. 2010). The table is associated with the time series description table by the time-sequence number field. This field is defined for the situation in which one point element has several time sequences.

The transition table connects the time-sequence elements with the time-series value storage table. A time sequence of one parameter type may correspond to several point elements, and one element of a single parameter type may have multiple time sequences. The transition table is designed to resolve this many-to-many relationship.
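The three tables can be sketched in SQL (executed here via Python's sqlite3 for concreteness). All table and column names are illustrative, not the system's actual schema:

```python
import sqlite3

def create_timeseries_tables(conn):
    """A description table for each series type, a value storage table
    of (time, value) pairs, and a transition table resolving the
    many-to-many link between point elements and series."""
    conn.executescript("""
        CREATE TABLE series_desc (
            series_id    INTEGER PRIMARY KEY,
            name         TEXT,      -- e.g. 'total water level'
            unit         TEXT,
            regular_step INTEGER,   -- NULL for an irregular series
            value_kind   TEXT,      -- mean / max / min / instantaneous
            source       TEXT);     -- observation or numerical model
        CREATE TABLE series_values (
            series_id INTEGER REFERENCES series_desc(series_id),
            t         TEXT,
            value     REAL);
        CREATE TABLE element_series (
            element_id INTEGER,     -- point element (forecast site)
            series_id  INTEGER REFERENCES series_desc(series_id));
    """)
```

Querying one site's surge, tide and total-level series then reduces to a join from `element_series` through `series_desc` to `series_values`.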

Using the storage structure for time sequences of point elements above, we can store the storm surge forecast time-series data efficiently. The advantage of this method is that it accommodates the multiple time sequences of storm surge forecast data, which benefits the query, retrieval and visualization of these data.

The storm surge numerical forecast data belong to scalar field data. Research on the organization and storage of scalar field data is still uncommon, and most existing methods are based on file management. We design a storage structure suitable for storm surge forecast field data by combining the field data storage structure with the grid model.

In the grid model, the area covered by the field data is divided along each dimension into grid cells, and these cells constitute the basic structure of the whole grid network (Yingjie and Lv 2015). This method can effectively store and manage regular vector field data and scalar field data. In this work, the water level data of the 2D scalar field are the primary data.

As shown above, the scalar field value storage table stores the specific time and the numerical data of the field. The grid element represents the center of a grid cell of the field data; its attribute fields record the location information of a grid point, including the grid row number, grid column number, etc. The grid object table describes the entire field area, including the longitude and latitude of the starting position, the resolution, the total number of grid points, and so on. The time index table is designed to speed up queries on time: it assigns different identities to different times, which avoids the low efficiency of querying all data in a single table.

Based on this storage structure of the grid model, we can effectively manage storm surge numerical forecast field data. When the region of the ocean element field data is very large, when the data have many spatial resolutions, or when the resolutions in latitude and longitude differ, we should also consider space region partitioning to access and query the data efficiently.

7.4.2 Software Structure Design

The user inputs the related parameters in the system interface; the prediction and analysis models need these parameters, which are used to trigger the sub-functions. Combined with the corresponding data from the data layer, the model calculation is completed, and the result is stored in the corresponding dataset in the data layer (Fig. 7.15).

Fig. 7.15
figure 15

The software framework of the typhoon storm surge process analysis prototype

7.4.2.1 The Design of the Data Layer

The data layer is the foundation of the whole system. It mainly includes basic geographic information, auxiliary information and process information. From the perspective of the data set, it is divided into image data sets, the terrain data set, the social and economic data set, the data set of storm surge disaster prediction, the data set of analysis of the disaster result, etc. The layer is based on the Oracle database management platform. Oracle can manage the spatial data and attribute data efficiently. The content is as follows:

  • Basic geographic information

Basic geographic information is the basis for simulating a real 3D scenario and storm surge disaster forecast and analysis. It includes the global basic terrain and 1 km resolution remote sensing image data, coastal key research area high resolution terrain and image data, 1:250000 administrative division place names, terrain, roads, rivers, map data of key research areas, etc.

  • Auxiliary information

Auxiliary information is necessary for disaster impact analysis. Different requirements need different degrees of detailed data. Auxiliary information can be divided into observation data, social and economic data and multiple model data. The content includes coastal tidal stations, the measured water level, station location information, population density with administrative region as the statistical unit, population, economy, land, lake area, other regional coastal buildings, offshore facility models, etc.

  • Process information

Process information comprises the intermediate results produced during the operation of the system. It mainly includes sea level prediction information, flood scope information, disaster impact analysis information, etc. The output will not change if the basic parameters are the same between different processes; thus, storing the process information can raise the efficiency of the system.

7.4.2.2 The Design of the Function Layer

The functional layer is a collection of functions of the server. It is the core part of the whole system. It can be divided into three major sub functions: data query and expression, storm surge disaster prediction, and disaster impact analysis.

  • Data query and expression

This module integrates the functions of the software for the multi-dimensional expression of ocean disaster elements in space and time.

  • Storm surge disaster prediction

The storm surge disaster prediction sub-function is realized by the integration of the storm surge disaster prediction model. It can be divided into sea level forecast and the submerged range estimates.

  • Disaster impact analysis

Disaster impact analysis computes statistics on the disaster area and the social-economic indices of that area. This function uses the prediction results of the storm surge disaster prediction sub-function together with the spatial analysis functions of GIS.

7.4.2.3 The Design of the User Layer

The user layer is the interaction window between the user and the system. This layer realizes the client operations through the browser and plugin. It can be divided into three parts: parameter control, visual control and results display.

Parameter control determines the function and the parameters according to the input parameters and the menu items that the user chooses, and it triggers the corresponding sub-function. Visual control realizes zooming and positioning on the sphere through the control panel; the system determines the parameters for data extraction and optimization based on the state of the user's perspective. The results are shown in the results window: this part renders and displays, in the client's scene, the data processed by the 3D plugin on the server.

The analysis prototype system of the storm surge disaster process in time and space adopts a three-layer structure: the data layer (data support), the application layer (application service support) and the user layer. The software configuration of the data layer varies with the different data on different servers; the software mainly includes the server operating system, the 3D sphere scene software, and the database and spatial database engine. The application layer is the program of the deployed prototype system, providing browser-based access services for end users; its software includes the server operating system and the dynamic link library components that are indispensable for the application. The configuration of the user layer refers to the configuration of the machine that accesses the system and uses all of its functions; in addition to the basic operating system and the browser, this machine needs to download and install the corresponding plug-in. The list of the modules of this system is shown in Table 7.13.

Table 7.13 List of modules of the analysis prototype system of the process of storm surge disaster in time and space

7.4.3 Software Development

The typhoon spatio-temporal data query sub-module manages the historical typhoon data. It provides three query methods: query by typhoon number, query by time period and query by typhoon level. This module returns the typhoon information (one or more records) to the user in the form of a data table, including the coordinate information of the typhoon path, wind speed, and wind direction. Its interface is shown below.

The site data generated by the storm surge numerical forecast contain a series of site latitude and longitude coordinates, surge heights ("added water"), astronomical tide levels and total water level forecast values. The system can also mark the warning level of a site on the sequence diagram with a red line, showing the relationship between the numerically forecast total storm surge level and the warning level.

Combining the storm surge forecast sites with the query of the forecast time sequence, we can express the surge height, astronomical tide, and total water level of the storm surge forecast sites.

The field data of a storm surge forecast are raster data in a time sequence; the value of each raster cell is the storm surge water level on that grid. The grid data of the storm surge forecast are converted into a sequence of texture feature information, and the mapping relationship between the texture information and the original data is kept, so the texture information can be transformed back into the original forecast data through the inverse mapping. Thus, the visualization and query of storm surge tide field data can be realized. The principle is as follows. The sea surface height of the storm surge is expressed by color: a correspondence is established between the sea surface height value and the color texture, a color sequence is generated by this relationship, and the color sequence is filled into the corresponding grid to obtain the texture map data. The texture mapping relationship can be expressed by the following equation:

$$ H(i) = \bigl(R(i),\, G(i),\, B(i)\bigr) $$

H represents the relative height of the storm tide; R, G, and B represent the three primary colors; and i is the corresponding ordinal value. One height sequence (e.g., 123321123) yields a corresponding color-value mapping sequence. The structure is shown in Fig. 7.16.

Fig. 7.16
figure 16

The mapping of storm surge texture field data

By the same token, texture information can be inverted to the original information according to the corresponding standard: the forward mapping generates the texture data, whereas the inverse mapping is used to query the original values behind the visual information. We can display the change of the whole storm surge water level dynamically and query the value at any point or any time, and after establishing the image pyramid we can display the information at multiple resolutions. The water level information of the powerful typhoon "SEN" is shown in Fig. 7.17.
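The forward and inverse texture mappings amount to an invertible lookup between water-level bands and RGB triples. A minimal sketch, assuming a discrete level-to-color table (the real system's color criterion may be continuous):

```python
def build_mapping(levels, colors):
    """Pair each water-level band with an RGB triple and keep the
    inverse table, so texture pixels can be queried back to values."""
    forward = dict(zip(levels, colors))
    inverse = {rgb: lvl for lvl, rgb in forward.items()}
    return forward, inverse

def encode(grid, forward):
    """Grid of water levels -> texture of RGB triples (visualization)."""
    return [[forward[v] for v in row] for row in grid]

def decode(texture, inverse):
    """Texture of RGB triples -> grid of water levels (query)."""
    return [[inverse[rgb] for rgb in row] for row in texture]
```

Keeping the mapping invertible is what allows a point query on the rendered texture to return the underlying forecast value rather than just a color.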

Fig. 7.17
figure 17

Tidal information on 2008-09-09 14:00

The field data of the numerical prediction of storm surge are commonly a grid dataset organized as a time sequence: each moment corresponds to one tidal field datum, which is rendered into raster data whose cell values are the storm surge water levels on the spatial grid. As described above, these field data are processed as a sequence of texture feature information with an invertible mapping to the original data, which realizes the visualization and query of the storm surge tide field data.

The query and expression of the storm surge tide field data can be divided into three modes: (1) query of the storm surge water level at a single position; (2) static display of the entire storm surge water field at a certain point in time; and (3) visual expression of the spatial and temporal variation of the water level over a certain period of the storm surge process.

Figure 7.17 is a static display of the storm surge water level field at a single time. The left and right sides show the storm surge water level information at 18:00 and 21:00 on September 9, 2009.

Figure 7.18 shows the dynamic change process of the storm surge water level field as a function of time. In actual use, the user chooses the beginning and ending times; the image below corresponds to the storm surge water level information at one moment of the dynamic change.

Fig. 7.18
figure 18

The comprehensive visual expression of typhoon storm surge process

Various methods are synthesized in one scenario: multiple factors are shown in different ways by calling the different visual expression methods for space objects. The figure shows the comprehensive expression of the No. 0813 typhoon storm surge process: the typhoon path and the storm surge water field are displayed in the same scene, and the water level can be seen to increase distinctly near the typhoon eye. The system also supports the comprehensive visual expression of cloud maps and numerical mapping in the same scenario, using two-dimensional scalar field numerical mapping, two-dimensional scalar field cloud maps or two-dimensional vector field diagram methods.

The storm surge of the powerful typhoon No. 0608 in August 2006 is taken as an example for analysis. The site data include Shacheng, Sansha, Meihua, Pingtan, Wungang, Chongwu, Shenhu, Xiamen and Dongshan. The three-dimensional flow field data adopt the calculation results of a numerical model developed by Xiamen University. The disaster field three-dimensional grid interval is 1/32°, with 25 vertical layers, covering 110°–130°E and 18°–30°N. The prediction and simulation experiments use the data of the Tianjin Binhai New Area combined with the forecast data of the national ocean information center.

The hazard prediction results expression module is responsible for predicting storm surge disasters based on the analysis of storm surge data. The hazard prediction covers GDP, population, transportation facilities, water area, and so on.
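The exposure statistics can be sketched as a grid overlay between the predicted flooded mask and a socio-economic layer such as population density. The inputs and names here are illustrative; this stands in for the GIS spatial-analysis step, not the system's actual implementation:

```python
def exposure_stats(flooded_mask, density, cell_area_km2):
    """Sum affected area and population over every flooded cell;
    `density` is people per km^2 on the same grid as the mask."""
    area = people = 0.0
    for mask_row, dens_row in zip(flooded_mask, density):
        for wet, dens in zip(mask_row, dens_row):
            if wet:
                area += cell_area_km2
                people += dens * cell_area_km2
    return area, people
```

The same overlay pattern applies to the other indices listed above (GDP, transportation facilities, water area) by swapping in the corresponding layer.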

7.5 The Analysis of Sea Level Rising

7.5.1 The System Structure

This system is built in a LAN environment. It can be divided into the user layer, the function layer and the data layer; the system framework is shown in Fig. 7.19. The data layer is mainly used for the organization and management of the system data, and it provides data support for the realization of the system functions; the data mainly include basic data and process data. The function layer mainly implements data processing and extraction, sea level rising forecast analysis, flood and disaster impact analyses, and other functions, and finally provides the forecast and analysis results to the users. The user layer provides users with a browser-based parameter control interface and an information display window based on a three-dimensional sphere.

Fig. 7.19
figure 19

The software framework of the sea level forecast system

The user inputs the related parameters through the system interface. The prediction and analysis models need these parameters, which are used to trigger the relevant sub-functions. Combined with the corresponding data from the data layer, the model calculation is completed, and the result is stored in the corresponding dataset in the data layer (Zhang et al. 2011, 2013).

According to the user's choice of scenario, the required data are extracted and optimized. After receiving the data, the client starts to render and display them.

The data layer is the foundation of the whole system. It mainly includes basic geographic information, auxiliary information and process information. From the perspective of the data set, it is divided into image data sets, the terrain data set, the social and economic data set, the data set of storm surge disaster prediction, the data set of analysis of the disaster result, etc. The layer is based on the Oracle database management platform. Oracle can manage the spatial data and attribute data efficiently. The content is as follows.

  1. Basic geographic information

Basic geographic information is the basis of simulating a real 3D scene and storm surge disaster forecast and analysis. It includes the global basic terrain and 1 km resolution remote sensing image data, coastal key research area high resolution terrain and image data, 1:250000 administrative division place names, terrain, roads, rivers, map data of key research areas, etc.

  2.

    Auxiliary information

Auxiliary information is necessary for disaster impact analysis and the display of results. Different requirements need different degrees of detail in the data. Auxiliary information can be divided into observation data, social and economic data, and model data. The content includes coastal tidal stations, measured water levels, station location information, population density with the administrative region as the statistical unit, population, economy, land and lake area, coastal buildings in the region, offshore facility models, etc.

  3.

    Process information

Process information is the intermediate information produced by the operation of the system. It mainly includes sea level predictions, flood extent information, disaster impact analysis results, etc. The output does not change if the basic parameters are the same between runs; thus, storing process information raises the efficiency of the system.

The function layer is the collection of functions on the server and the core part of the whole system. It can be divided into three major sub-functions: the sea level rising forecast, disaster impact analysis, and data extraction and optimization.

  1.

    Sea level rising forecast

The sea level rising forecast sub-function integrates the forecasting model. The forecast can be divided into sea level forecast and submerged range estimates.

  2.

    Disaster impact analysis

The disaster impact analysis function uses the prediction results of the sea level rising forecast sub-function. Through GIS spatial analysis, it computes the disaster area and the related socio-economic statistics.

  3.

    Data extraction and optimization

Data extraction and optimization improve the effectiveness and efficiency of displaying results. According to the parameters the user chooses and the viewing angle, the system determines the type of data to extract and the degree of optimization, and provides the required data to the client.

The user layer is the interaction window between the user and the system. This layer realizes the client operations through the browser and plugin. It can be divided into three parts: parameter control, visual control and result display.

Parameter control determines the function and its parameters according to the input parameters and the menu items the user chooses, and triggers the corresponding sub-function. Visual control realizes zooming and positioning on the sphere through the control panel; the system derives the data extraction and optimization parameters from the state of the user's viewpoint. The result display is the final window for presenting results: it renders and displays, in the client's window, the data processed through a 3D plugin on the server.

7.5.2 System Implementation

7.5.2.1 Model Integration Based on Service

The sea level rising forecast model is mainly used to predict sea level change and estimate the corresponding flooded areas. Currently, there are two methods to forecast sea level change: the observation method and the numerical simulation method. The observation method extrapolates trends from many years of sea surface height observations. The numerical simulation method is based on a prediction model and predicts sea surface height by adjusting the model parameters under different conditions. Flooded-area estimation methods can be divided into non-source flooding and source flooding.

The system adopts the numerical simulation method in the sea level rising forecast model for the Tianjin area. The flooded-area estimate improves on the source-flood algorithm: by adding water entry points to the model, the method calculates the evolution of the water. The specific idea is as follows. The user sets the tide level, risk prediction and embankment parameters to calculate the largest rise in sea level. The 0-m sea surface is taken as the reference surface. The flooded-area model estimates the extent of the submerged area when the sea level rises to a given height, and a data set of submerged extents at equal height intervals from 0 m is produced.
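
The source-flood idea above can be illustrated with a minimal sketch: starting from assumed water entry points, flooding spreads only to connected cells below the target sea level, so low-lying land behind an embankment stays dry, unlike a non-source ("bathtub") estimate that floods every low cell. The grid, entry points and function name are hypothetical.

```python
from collections import deque

def flooded_area(dem, sources, sea_level):
    """Source-flood estimate on an elevation grid: a cell is flooded only if
    its elevation is below sea_level AND it is connected (4-neighbourhood)
    to a water-entry point."""
    rows, cols = len(dem), len(dem[0])
    flooded = set()
    queue = deque()
    for r, c in sources:                      # seed from the water-entry points
        if dem[r][c] < sea_level:
            flooded.add((r, c))
            queue.append((r, c))
    while queue:                              # breadth-first spread of the water
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and (nr, nc) not in flooded and dem[nr][nc] < sea_level:
                flooded.add((nr, nc))
                queue.append((nr, nc))
    return flooded

# An embankment (elevation 9) separates a low basin (right column) from the
# sea on the left; the water enters at the upper-left cell.
dem = [
    [0, 1, 9, 0],
    [0, 1, 9, 0],
    [0, 1, 9, 0],
]
wet = flooded_area(dem, sources=[(0, 0)], sea_level=2.0)
```

Here the cells behind the embankment stay dry even though they lie below sea level, which is exactly the behaviour the source-flood algorithm adds over the non-source method.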

The sea level rising forecast model varies with the prediction area. In this system, we adopt web service technology to encapsulate the existing models uniformly, and the system calls the appropriate service dynamically according to the region the user specifies. During service encapsulation, the system follows unified service description rules so that the different models can be integrated and called through a unified interface.

We use standard XML files to store the model service descriptions according to the following rules:

  1.

    The file must contain three level 1 labels: forecast area, controllable parameters and input data.

  2.

    The forecast area tag must contain name and area tags. The name tag describes the model's forecast area in text, such as "Tianjin". The area tag uses a coordinate tuple (X0, Y0, X1, Y1) to determine the precise location of the area; the four values represent the longitude and latitude of the upper-left corner and of the lower-right corner, respectively, and at least two decimal places are kept.

  3.

    The controllable parameters tag must contain the number of parameter labels, and the total number of parameters must match that number. Each parameter label includes three level 3 tags: parameter description, parameter type and parameter value.

  4.

    The input data tag must likewise contain the number of data labels, consistent with the number of data labels present. Each data tag contains three level 3 tags: data description, data type and file location.

The above tags use identifiers agreed with the system to ensure that different service descriptions are standardized and can be recognized by the system.
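
A service description following the four rules might look like the sketch below. The exact tag names (model_service, forecast_area, parameters, input_data, and so on) and the file path are assumptions for illustration; the source prescribes the structure but not the identifiers.

```python
import xml.etree.ElementTree as ET

# Hypothetical service description obeying the four rules above.
DESCRIPTION = """
<model_service>
  <forecast_area>
    <name>Tianjin</name>
    <area>117.20,39.40,118.10,38.70</area>
  </forecast_area>
  <parameters count="2">
    <parameter>
      <description>tide level</description>
      <type>float</type>
      <value>5.0</value>
    </parameter>
    <parameter>
      <description>embankment height</description>
      <type>float</type>
      <value>6.5</value>
    </parameter>
  </parameters>
  <input_data count="1">
    <data>
      <description>coastal DEM</description>
      <type>grid</type>
      <location>/data/tianjin_dem.grd</location>
    </data>
  </input_data>
</model_service>
"""

def parse_service_description(xml_text):
    root = ET.fromstring(xml_text)
    name = root.findtext("forecast_area/name")
    # (X0, Y0, X1, Y1): upper-left then lower-right corner, two decimals kept.
    bbox = tuple(float(v) for v in root.findtext("forecast_area/area").split(","))
    params = root.find("parameters")
    # Rule 3: the declared count must match the number of parameter labels.
    assert int(params.get("count")) == len(params.findall("parameter"))
    return name, bbox

name, bbox = parse_service_description(DESCRIPTION)
```

Reading the forecast area name and bounding box in this way is what lets the system match a user-selected region to a registered model service.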

After the model is encapsulated, the service and its description file are released and registered in the system at the same time. The system provides users with the optional forecast areas through an area list. The user inputs the related parameters, and the system extracts the relevant data from the data layer based on the description of the selected area. The forecasting model then runs and stores the result in the corresponding data set.

7.5.2.2 Data Extraction and Optimization Based on Scenarios

The simulation of a real 3D scene requires terrain data. Integrating imagery and other types of data further expands the data volume: the data volume of a 3D scene for a given area is often several times or more that of the corresponding two-dimensional map data. For a network-based system, transmitting and processing such massive data puts great pressure on both the network and the client machine. To display the 3D scene smoothly and simulate rising sea levels on the three-dimensional sphere, the amount of data must be reduced as much as possible without affecting the display effect. Thus, it is necessary to extract and optimize the original data.

The area that the computer screen displays is what is visible to the user. With a fixed window, the size of the smallest object that can be distinguished increases as the visible range increases. Based on this principle, the extraction area and the degree of optimization are determined from the user's viewpoint:

  1.

    The scene data and predicted results within and around the user's current visible range are extracted.

  2.

    According to the height of the user's current viewpoint, the current optimization level is determined: the higher the viewpoint, the higher the optimization level and the lower the actual resolution of the scene.

  3.

    Based on the dynamic simulation rate, the time optimization level of the data is determined: the higher the rate, the higher the optimization level.

  4.

    The user's viewpoint is kept unchanged while dynamically simulating the process of sea level rising.

Taking as an example the predicted sea level rise in the Tianjin coastal area under the largest once-in-a-century tide risk, the specific extraction and optimization method and process are as follows:

  1.

    Extraction: the current visible area (X0, Y0) – (X1, Y1) is calculated from the user's viewpoint parameters and the screen size. The original data are clipped to this visible area, and only data within this extent are retained after extraction.

  2.

    Space optimization: the height of the user's current viewpoint is obtained from the user parameters, and the space optimization level and its parameters (a, N) are determined. A node whose slope change relative to its adjacent nodes is greater than a is retained; when a continuous run of nodes whose slope changes are less than a contains more than N nodes, only one of every N is kept. The data are thus optimized on the basis of the extraction.

  3.

    Time optimization: the time optimization level and the parameter h are determined by the simulation rate specified by the user, and the submerged range data are sampled at interval h.
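
The space optimization rule with parameters (a, N) can be sketched for a one-dimensional profile of terrain nodes. This is one illustrative reading of the rule, not the system's actual implementation; the function name and test profile are assumptions.

```python
def thin_profile(values, a, n):
    """Keep a node whenever its slope change exceeds a; in runs of more
    than n low-change nodes, keep only one of every n. Endpoints are
    always kept. Returns the indices of the retained nodes."""
    if len(values) < 3:
        return list(range(len(values)))
    keep = [0]            # first endpoint always kept
    run = []              # current run of low-slope-change node indices
    for i in range(1, len(values) - 1):
        # second difference = change of slope at node i
        slope_change = abs((values[i + 1] - values[i]) - (values[i] - values[i - 1]))
        if slope_change > a:
            # flush the run, sampling 1 of every n if it exceeds n nodes
            keep.extend(run if len(run) <= n else run[::n])
            run = []
            keep.append(i)          # significant node: always retained
        else:
            run.append(i)
    keep.extend(run if len(run) <= n else run[::n])
    keep.append(len(values) - 1)    # last endpoint always kept
    return keep

# Flat terrain with one sharp spike: the spike's nodes survive intact,
# the flat run is thinned to one node in every three.
values = [0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0]
kept = thin_profile(values, a=1.0, n=3)
```

The same idea extends to a grid by thinning along rows and columns; the point is that terrain features (large slope changes) are preserved while featureless stretches shed most of their nodes.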

Table 7.14 shows the optimization ratios under different viewpoints at the same data simulation rate. The values in the table are the ratio of the currently operated data volume to that at the next higher level. The selected Tanggu area is approximately 1/100 of the selected Tianjin area. Figures 7.20 and 7.21 show the sea level rising simulation.

Fig. 7.20
figure 20

The simulation picture of Binhai district in Tianjin

Fig. 7.21
figure 21

The simulation picture of Tanggu district in Tianjin

Table 7.14 The contrast table of optimized data in different areas