Introduction

Parallel to the need for new technologies and renewable energy resources to address sustainability, the emerging field of Artificial Intelligence (AI) has experienced continuous, rapid growth in applying its capabilities of modelling, managing, processing, and making sense of data across all areas related to the production and management of energy. Moreover, the current trend indicates that the energy supply and management process will eventually be controlled by autonomous smart systems that optimize energy distribution operations based on integrative data-driven Machine Learning (ML) techniques or other types of computational methods.

Computational techniques can be applied to a broad range of sustainability-related applications, including life and health sciences, the environment and ecosystems, and product and process optimization, by analyzing data and providing recommendations for improving sustainability parameters. Thus, the integration of computational methods can be a solution to sustainability challenges. Any product can be designed to be more efficient and better optimized if it is modelled, analyzed, and tested before it is built. The Digital Twin (DT) is a novel coupled approach to such modelling and analysis based on big data and AI/ML techniques.

Digital twin in general refers to computational models or platforms that monitor, model, optimize and predict a complex interdisciplinary system based on real-time big data. Within a digital twin, any form of computational technique, including the Internet of Things (IoT), AI, ML, and analytics, may be integrated to create live digital models able to update and change information as needed. Digital twin models are self-learning systems in the sense that they continuously learn and update from multiple sources to reflect real-time status, and they are regarded as a panoptic reflection of a physical body in the digital world.

The physics-based computational digital twin is a unique technology that focuses on the bilateral interdependency between the virtual and physical representations. As a consequence, the product benefits in the sense that it can adapt its real-time behaviour according to the feedback generated by the digital twin. Conversely, this bridging allows the simulation to precisely mirror the real-world condition of the physical body (see Fig. 7.1).

An exciting aspect of the digital twin is the potential to break the classical Product Lifecycle Management (PLM) paradigm with its fixed static steps, in which a Need defines the Concept of the intended system, which is then turned into a Digital Design to facilitate the Manufacturing step.

Fig. 7.1
A flow diagram of the classic design process. Need points to concept, design, and manufacture.

The classic design process

On the other hand, in the state-of-the-art design process, beyond all the initial and continuous sustainable resourcing and maintenance, an additional active step emerges to refine the product: computational sustainability. Moreover, a well-structured PLM platform integrated with AI/ML techniques is capable of offering a sustainable solution (see Fig. 7.2).

Fig. 7.2
A flow diagram of the sustainable computational design process. Need points to concept, design, computational sustainability, and manufacture. Computational sustainability points back to the concept.

The sustainable computational design process

Nowadays, computations in terms of numerical models and simulations play a significant role in reaching the optimal sustainable solution. Meanwhile, the exponential growth of computational resources makes it feasible to utilize numerical methods in various scientific fields. To illustrate, computational frameworks for the twin's architecture based on data assimilation, similar to those used in weather prediction, have shown progressively improving accuracy alongside the development of computing technologies, especially in the last decade.

Mathematics for Sustainability

Real-Life Applications

Environment

Modelling of Ocean Flows

Anthropogenic climate change is the greatest threat the world has ever faced. Sophisticated computational models simulating the physical dynamics of the atmosphere and oceans are essential to project future changes under the different scenarios designed by policy makers. Therefore, the availability of models that give accurate results in a feasible computational time is a substantial factor in decision-making to assess and prevent the catastrophic threats of climate change.

Numerical modelling of geophysical currents is crucial for predicting the state of the ocean and the weather. It provides knowledge and understanding of the mechanisms that drive climate change. However, in order to resolve all the significant flow structures, a resolution of the order of 0.1 mm is required. Such a refined mesh is beyond reach even with modern supercomputers. Moreover, the memory demand due to the large number of degrees of freedom required for a proper description of the flow system can be prohibitive. Hence, it is a challenge to perform the simulations for a sufficiently long period to observe the variations in the quantities of interest. For this reason, advanced reduced order modelling techniques are applied to make such simulations feasible; reduced order modelling is discussed later in this chapter. One of the results of the modelling of the instantaneous vorticity distribution in the North Atlantic Ocean is shown in Fig. 7.3.

Fig. 7.3
A heatmap of the distribution of instantaneous vorticity in the North Atlantic Ocean.

Instantaneous vorticity distribution in the North Atlantic Ocean computed using methods of reduced order modelling

Large-scale Modelling of Urban Air Pollution

Urban air pollution harms public health, contributes to global warming, and damages ecosystems. It dramatically increases mortality and health care costs, and further magnifies the hazards of climate change. Therefore, mathematical modelling of the evolution of urban air pollutants is an important tool for extracting knowledge from observed air quality data and predicting the propagation of pollutants in time and space. For instance, one such model is the transport-diffusion equation, where the convective field is given by the solution of the Navier-Stokes equations and the source term is an empirical time series. An example of the model output is shown in Fig. 7.4.

Fig. 7.4
The output of the model has several blocks, and the streamlines of the velocity overlap over them.

Streamlines of the velocity and a cross section of the concentration field
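To make the model concrete, the following is a minimal sketch of a one-dimensional transport-diffusion solver using an explicit finite-difference scheme. It is only a toy counterpart of the model above: the constant advection velocity and the synthetic emission term are placeholders for the Navier-Stokes convective field and the empirical source time series.

    import numpy as np

    # 1-D transport-diffusion: dc/dt + u dc/dx = D d2c/dx2 + s(x, t)
    # All constants are illustrative; the real model uses a Navier-Stokes
    # velocity field and an empirical emission time series as source term.
    L, nx = 1000.0, 201                              # domain length [m], grid points
    dx = L / (nx - 1)
    x = np.linspace(0.0, L, nx)
    u, D = 2.0, 5.0                                  # advection speed [m/s], diffusivity [m2/s]
    dt = 0.4 * min(dx / u, dx**2 / (2 * D))          # stable explicit time step

    c = np.zeros(nx)                                 # pollutant concentration
    source = np.exp(-((x - 200.0) / 20.0) ** 2)      # localized emission profile

    for step in range(2000):
        t = step * dt
        s = source * (1.0 + 0.5 * np.sin(2 * np.pi * t / 600.0))       # time-varying emission
        adv = -u * (c - np.roll(c, 1)) / dx                            # upwind advection (periodic)
        dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2     # central diffusion
        c += dt * (adv + dif + s)

    print("peak concentration:", c.max())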

Optimization of Hybrid Energy Systems

Hybrid Energy Systems (HES) are energy systems that can satisfy the power demand with both non-renewable and renewable energy sources. They play a central role in reducing our dependence on non-renewable energy sources when an immediate transition to renewables is not feasible. At the same time, a clever way of managing the energy system is essential to obtain a substantial reduction in emissions.

Mathematical optimization is a powerful tool for finding values of the control variables that reduce the overall emissions while still satisfying the power demand. For instance, in a study on minimizing the emissions associated with the fuel consumption during the navigation of a vessel, a significant reduction of the key performance indicators was obtained by applying a stochastic optimization technique, Simulated Annealing. The results presented in Fig. 7.5 show reductions of up to 31%, even though highly heterogeneous mission profiles were considered in that work. Hence, one may conclude that similar approaches can be applied to real-world scenarios where variability and uncertainty are present.

Fig. 7.5
A double-bar graph comparing the key performance indicators. It plots bars for the original EMS and SHEMS that are highest for particulate matter and lowest for carbon dioxide.

Comparison of the key performance indicators in standard energy management system (EMS) and SHEMS (Smart Hybrid EMS)
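To illustrate the optimization step, the sketch below implements a generic simulated annealing loop over two bounded control variables with a placeholder emissions objective; the control variables, constraints and emission models of the actual vessel study are not reproduced here.

    import math, random

    def emissions(x):
        # Placeholder objective standing in for the emissions predicted by the
        # energy management model as a function of the control variables x.
        return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2 + 0.1 * math.sin(10 * x[0])

    def simulated_annealing(f, x0, n_iter=5000, t0=1.0, cooling=0.999):
        x, fx = list(x0), f(x0)
        best, fbest = list(x), fx
        t = t0
        for _ in range(n_iter):
            cand = [xi + random.gauss(0.0, 0.05) for xi in x]       # random perturbation
            cand = [min(1.0, max(0.0, xi)) for xi in cand]          # keep within bounds
            fc = f(cand)
            # Accept improvements always, worse moves with Boltzmann probability.
            if fc < fx or random.random() < math.exp(-(fc - fx) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
            t *= cooling                                            # geometric cooling schedule
        return best, fbest

    best, fbest = simulated_annealing(emissions, [0.5, 0.5])
    print("best controls:", best, "objective:", fbest)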

Life Science

Coronary artery diseases are among the main causes of sudden death worldwide. The patient-specific nature of the arterial system makes it almost impossible to predict the appropriate time for therapeutic intervention empirically. Moreover, it is widely accepted to use animals, as the closest biological systems to that of humans, for conducting research in this field [4, 18]. However, breeding laboratory animals demands considerable financial and human resources. Computational methods can be used to predict the behaviour of biological systems and reduce the need for laboratory experiments on animals.

Blood flow hemodynamics has a direct influence on the biology of the arterial wall and is closely linked with the development of coronary artery disease. Computational fluid dynamics (CFD) solvers can be employed to analyze hemodynamic metrics, such as the blood flow-induced shear stresses at the inner vessel lumen, to assess an individual’s coronary disease risk. Still, calculating hemodynamic indices using traditional CFD methods is relatively slow and relies on substantial computational resources. Consequently, CFD-based hemodynamic computation is not practical for integrated and large-scale use in clinical settings.

Novel model reduction techniques, such as neural networks integrated with CFD, make it possible to lower the computational cost of the numerical simulations while providing accurate predictions of the blood flow hemodynamics. In the traditional, purely CFD-based methodology, a patient-specific geometry is derived by image processing and 3-D reconstruction of CT-scan images and is then modelled with CFD solvers to evaluate the hemodynamic indices. In general, several simulations on different geometries are needed to derive a general relationship, which demands considerable computational time and resources. On the other hand, modern model reduction techniques can reduce the computational time from days to seconds. The technique utilizes advanced mathematical methods to parameterize the system of equations and is trained on a set of simulations, a stage known as the offline stage. The trained model can then be used to predict any other geometrical and flow configuration in a matter of seconds. The procedure is shown in Fig. 7.6.

Fig. 7.6
A flow diagram of biomechanical models. An imaging computational scan leads to patient-specific geometry, then to reduced-order models with three points of parameterized formulation, geometrical variation, and efficient solution by the POD-Galerkin method, and finally to fast simulations.

A sketch of reduced order framework for biomechanical models [6]

Ballarin et al. applied this methodology to study blood hemodynamics in patient-specific coronary artery bypass grafts [6]. The Oscillatory Shear Index (OSI) is of great importance in characterizing blood hemodynamics and assessing the vessel’s susceptibility to rupture [5, 22]. Figure 7.7 shows the evaluation of the OSI for different geometrical and flow conditions in the coronary arteries and bypass grafts near the anastomosis.

Fig. 7.7
On top, an illustration has the LAD and DIAG branches of the left coronary artery and the LITA bypass graft, indicated by different shades. At the bottom, 6 illustrations are arranged in 2 rows. The flow ratios \(f^{LITA}\) and \(f^{LCA}\) are 1 and 1 for column 1, 1 and 1.33 for column 2, and 1.33 and 1 for column 3.

Left internal thoracic artery (LITA) to diagonal branch of the left anterior descending artery (DIAG) anastomosis for different stenosis (rows 1 and 2) and inflow conditions (columns 1 to 3). Coloured arrows denote blood flow direction [6]

In related studies, Siena et al. [45] and Balzotti et al. [8] utilized ROM-CFD based on Feed-forward Neural Networks (FNN) for the evaluation of hemodynamic indices adjacent to the walls, including the wall shear stress. The results predicted by the machine learning method showed excellent agreement with those of the Full Order Method (FOM), i.e., the CFD simulation. In terms of cost, the latter requires a computational time of the order of hours, whilst the former is accomplished in just a few seconds (see Fig. 7.8).

Fig. 7.8
A line graph of \(C_{WSS}\) versus \(t/T\). It plots two lines of target data and predicted data that initially remain constant, fall, rise, and then fall. Both lines overlap.

Time evolution of wall shear stress prediction provided by FNN (red line) and the FOM simulation (blue points) [6]
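A minimal sketch of such a feed-forward surrogate is shown below using scikit-learn. The training data are synthetic stand-ins: in the cited studies the inputs are flow and geometry parameters plus time, and the target is a wall shear stress coefficient extracted from full-order CFD snapshots.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data: columns mimic (parameter 1, parameter 2, t/T),
    # the target mimics a wall-shear-stress coefficient from CFD snapshots.
    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(500, 3))
    y = np.sin(2 * np.pi * X[:, 2]) * (1 + X[:, 0]) + 0.3 * X[:, 1]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Offline phase: training a small feed-forward network replaces repeated CFD runs.
    fnn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
    fnn.fit(X_train, y_train)

    # Online phase: evaluating the surrogate takes milliseconds per query.
    print("test R^2:", fnn.score(X_test, y_test))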

Process and Product Optimisation

Freight and passenger transport (land, air, sea and water) supports economic growth by providing access to resources and markets. It also improves the quality of life by linking people to employment, health, education and other amenities. Thus, transportation plays an important role in economic and social development. Nevertheless, it comes with negative spillover effects such as congestion, pollution, resource depletion and resource-intensive consumption. Sustainable transportation is associated with the concept of clean transportation with the least possible impact on the environment.

Above all, sea transport is one of the main components of the world’s economy, as the largest carrier of freight around the globe. Motorised transport is over 95% dependent on oil and accounts for almost half of the world’s use of oil [54]. As a consequence, much attention has been devoted in the past few years to reducing the carbon footprint of sea transport by adopting sustainable practices.

Accordingly, the shipbuilding industry is making a radical change toward solutions with a smaller environmental impact by employing low-emission engines, optimized hull shapes with lower wave resistance and noise generation, and by reducing the metal raw materials used during manufacturing. In a recent research study, Tezzele et al. developed a structural optimization pipeline for modern passenger ship hulls which exploits advanced model order reduction techniques to reduce the dimensionality of both the input parameters and the outputs of interest [48]. Figure 7.9 shows the geometry of the passenger ship considered in their study.

Fig. 7.9
Two geometric passenger ship models. On the left and right are the complete view and a longitudinal section, respectively.

A complete view of the hull on the left, and a longitudinal section on the right [48]

Figure 7.10 depicts the successive optimization runs performed using a novel model reduction technique, POD-NARGPAS, to predict the reduced mass.

Fig. 7.10
A multi-line graph of optimization refinements of mass reduction versus iterations. The lines for runs 1 to 4 initially remain constant, fall, and then remain constant, while the lines for run 5 and the best GPR remain constant.

Results of relative mass reduction for different optimization runs of the parametrized hull [48]

More than 7% of the total carbon dioxide emissions in the US are related to the healthcare industry, contributing an estimated 479 million tons of CO\(_2\) each year [36, 49]. When assessed by sector, hospitals and clinics, medical structures, and pharmaceutics are the top emitters. Among these, pharmaceutical industries and drug development activities are believed to be among the top contributors [41]. Nowadays, drug development has become the core activity of any pharmaceutical company. Interestingly, however, the output of new drugs has been decreasing for the past decade while the prices of new drugs have risen steadily, leading to access problems for many patients [31]. This may be attributed to the fact that the drug development process involves a range of operations such as blending, granulation, milling, coating, tablet pressing and filling, and is therefore regarded as an interdisciplinary science of chemistry, mechanics and medicine [25].

Granulation, the process of particle enlargement by agglomeration, is one of the most significant unit operations in the production of pharmaceutical dosage forms, mostly tablets and capsules [44]. The complex physics of the granulation process can be predicted by coupling several numerical methods. Dompé Farmaceutici S.p.A. is a leading biopharmaceutical company engaged in innovative drug processes and biotechnologies. In a recent research study, in collaboration with SISSA, they developed a hybrid CFD-DEM model to describe the granulation process by taking into account both the thermal and the dynamic balance between particles and flow. The Discrete Element Method (DEM) is based on the Lagrangian frame of reference and is able to simulate particles of any shape and their inter-particle bonds. Figure 7.11 shows the CFD-DEM simulation steps for granulation process modelling in the drug production system.

Interestingly, to exploit the maximum computational capacity, a machine learning technique based on offline/online phases for training and evaluation of the data was employed on the model. Figure 7.12 compares the FOM results with those of the ROM. In this model, due to the high number of particles (\(10^6 \sim 10^9\)), the computational time of the FOM is of the order of days, while the ROM takes only a few seconds to minutes.

Fig. 7.11
The CFD-DEM simulation of the particle granulation process consists of two steps: geometry simplification and three-dimensional CFD-DEM.

Steps for the CFD-DEM simulation of particle granulation process

Fig. 7.12
On the left, the computational model has the labels of inlet, outlet, front and back, wall, \(L_x\), \(L_y\), and \(L_z\). On the right, the comparison of ROM results a, b, and c has 50%, 70%, and 90% energy, respectively, and d is the result of FOM.

CFD-DEM simulation of the granulation process. Left: the computational model. Right: comparison of ROM results a, b and c with the FOM result d
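As a rough illustration of the discrete element part of such a model, the sketch below advances a few soft-sphere particles falling onto a floor with a linear spring-dashpot contact law. All parameters are illustrative, particle-particle contacts are ignored, and the coupling with the fluid phase (drag and heat exchange provided by the CFD solver) is omitted.

    import numpy as np

    # Minimal 1-D soft-sphere DEM sketch: particles settling onto a floor at z = 0.
    n = 5
    z = np.linspace(0.2, 1.0, n)       # particle heights [m]
    v = np.zeros(n)                    # velocities [m/s]
    r, m = 0.02, 1e-3                  # radius [m], mass [kg]
    k, c = 1e3, 0.05                   # contact stiffness [N/m], damping [N s/m]
    g, dt = 9.81, 1e-5

    for step in range(200000):
        f = -m * g * np.ones(n)                                   # gravity
        overlap = r - z                                           # penetration into the floor
        contact = overlap > 0.0
        f[contact] += k * overlap[contact] - c * v[contact]       # spring-dashpot contact force
        v += dt * f / m                                           # symplectic Euler update
        z += dt * v

    print("final heights:", np.round(z, 4))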

The invention of the first electrical appliances goes back to the first decades of the 19th century, meaning that home appliances have been making our lives easier for about two centuries. Addressing appliance energy consumption is important both because of its present consumption and emissions and because of its rapid growth. Household energy consumption represents a large portion of the energy consumption in developed countries, in some cases even higher than that of industry [24]. Although there have been many innovations over the past years, there is still a long way to go to reach a sustainable standpoint. Sustainable modern home appliances can reduce energy consumption by up to 50% [2]. Another aspect of sustainability is water consumption, especially in water-using appliances such as dishwashers and washing machines.

Electrolux is a Swedish multinational home appliance manufacturer, headquartered in Stockholm and consistently ranked among the world’s largest appliance makers by units sold. Electrolux brand appliances have been making housework easier for more than a century; the company’s products include refrigerators, dishwashers, washing machines, cookers, vacuum cleaners, air conditioners and small domestic appliances. As one of the leading providers of technological and modern home appliances, Electrolux has been developing Research and Development (R&D) projects, particularly to pursue sustainable, less energy- and resource-intensive products. In a recent collaboration with SISSA, the aim was to reduce the water and electricity consumption of a professional dishwasher.

A dishwasher is an energy-intensive home appliance: one cycle of dishwashing is equivalent to about 20 hours of continuous TV running. The current technology uses an identical washing program for all the items in the machine, whereas rinsing for plates should differ from that for glasses, for instance. The idea was to implement an optimized image recognition device in the dishwasher to select a correct and suitable washing cycle. Meneghetti et al. developed an image processing technique for the image recognition device based on a Convolutional Neural Network (CNN) algorithm to differentiate the objects in the machine [29]. The workflow of the CNN method used in this research is shown in Fig. 7.13. Such a system results in optimized water consumption in the washing cycle.

Fig. 7.13
An architecture of a typical convolutional neural network. The input has a photograph of a dog, which is linked to four sets of convolutional blocks in the feature learning part. Further, it is linked to the flattening layer, the fully connected layer, and the output in the classification part.

Architecture of a typical CNN which includes Feature Learning and Classification parts [29]
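A minimal PyTorch sketch with the structure of Fig. 7.13, a convolutional feature-learning part followed by a fully connected classification part, is given below. The layer sizes, input resolution and the three output classes are illustrative assumptions and do not reproduce the network of [29].

    import torch
    from torch import nn

    class DishCNN(nn.Module):
        def __init__(self, n_classes=3):
            super().__init__()
            # Feature learning: convolution + ReLU + pooling blocks.
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            # Classification: flattening layer followed by fully connected layers.
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
                nn.Linear(128, n_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    model = DishCNN()
    dummy = torch.randn(1, 3, 64, 64)      # one 64x64 RGB image
    print(model(dummy).shape)              # torch.Size([1, 3])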

The next step of the project was to reduce the memory consumption of the image recognition device. To do so, a novel reduced approach for CNNs was proposed, which led to a less energy-intensive device. More details of the project can be found in [30].

Fig. 7.14
Top left. An illustration has 3 layers labelled \(D_{top}\), \(D_{mid}\), and \(D_{BB}\). Top right. An illustration of five layers, where the top drawer and crisper are the fourth and fifth layers, respectively. A line graph of temperature versus sensor plots two lines in a fluctuating trend. A table at the bottom.

Sensor position in the fridge and validation of the temperature against experimental data

Another energy-intensive home appliance is the fridge. In general, experimental and numerical methods are used to predict and improve refrigeration efficiency in terms of energy saving and temperature maintenance. The cabinet and door gaskets play an important role in the heat transfer phenomena in the fridge. This complex system involves several physical phenomena, including natural/forced convection, conjugate heat transfer (CHT), recirculation driven by a fan, and radiative heat transfer. Electrolux, in another collaboration with SISSA, modelled the air flow and heat transfer in the fridge and successfully validated the numerical results against experimental data (see Fig. 7.14). The model was based on the mass, momentum and energy conservation principles, and the set of equations was solved with the well-known open-source flow dynamics solver OpenFOAM.

Interestingly, CFD can provide every detail of the flow in the cabinet in terms of velocity and temperature for any working condition. For instance, the effect of the fan on the ventilation in the cabinet is shown in Fig. 7.15.

Fig. 7.15
4 illustrations of the temperature distribution and velocity contour in the ventilated fridge. 1. Temperature ranges from 2.7e+02 to 2.9e+02 K. 2. Velocity magnitude ranges from 0.0e+00 to 2.3e+01. 3. Temperature ranges from 2.7e+02 to 2.8e+02 K. 4. Velocity magnitude ranges from 0.0e+00 to 5.0e+01.

Temperature distribution and velocity contour in the ventilated fridge: (top) fan off, (bottom) fan on

The next part of the project deals with creating a high-fidelity database based on the validated CFD model for the real fridge geometry. To do so, an offline phase consisting of approximately one hundred simulations for different geometrical parameters was carried out. After implementing a suitable model reduction technique, the resulting library could estimate the temperature distribution at any point of the fridge within a few seconds. Figure 7.16 compares the temperature distribution of the FOM (CFD) with that of the ROM. The ROM can predict the temperature distribution with an error of less than \(0.6\ ^\circ \textrm{C}\).

Fig. 7.16
Three heatmaps comparing the temperature distribution. 1 and 2 are the CFD field in K and the reconstructed field in K, both with 5 dots and values around 2.8e+02. 3 is the absolute error, with 5 dots and values around 0.2.

Comparison of the temperature distribution between the FOM (CFD) and the ROM, and the absolute error
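A minimal sketch of such an offline/online pipeline is given below: snapshots computed for different parameter values are compressed with POD, the modal coefficients are interpolated over the parameter space, and a new configuration is then evaluated almost instantaneously. The synthetic one-dimensional temperature field and the two parameters are placeholders for the roughly one hundred CFD simulations of the actual study.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Offline phase: snapshots of the temperature field for different parameters
    # (synthetic 1-D fields standing in for the CFD solutions).
    rng = np.random.default_rng(1)
    params = rng.uniform(0.0, 1.0, size=(100, 2))       # e.g. normalized fan speed, gasket gap
    x = np.linspace(0.0, 1.0, 400)
    snapshots = np.array([280 + 5 * p[0] * np.sin(np.pi * x) + 2 * p[1] * x
                          for p in params])             # shape (100, 400)

    mean = snapshots.mean(axis=0)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    modes = Vt[:5]                                      # leading POD modes
    coeffs = (snapshots - mean) @ modes.T               # modal coefficients

    # Interpolate the coefficients over the parameter space.
    interp = RBFInterpolator(params, coeffs)

    # Online phase: a new parameter point is evaluated in milliseconds.
    p_new = np.array([[0.4, 0.8]])
    T_rom = mean + interp(p_new) @ modes
    print("predicted field shape:", T_rom.shape)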

Enhancement of Computational Performance

While the examples above demonstrate the indispensable role of computational modelling for sustainability, these simulations can demand substantial power, and high performance computing is frequently required to make them feasible. Hence, in this section, we discuss how the simulations themselves can be made more sustainable and use less energy while still providing reliable results. One class of methods that provides an “energy-efficient” version of the original model is Reduced Order Modelling (ROM) [10, 11, 12, 43].

Reduced Order Models

Many techniques have been developed to decrease the computational costs and the energy consumption of computational simulations. In the context of time-dependent or parameter-dependent problems, Reduced Order Models (ROMs) aim at building a surrogate model that can accurately represent the solution of the full order model (FOM) at a much smaller computational cost. Several of these techniques involve two phases: an “offline” phase, where a reduced space is computed and the ROM is learned, which still requires the costly computation of a few FOM solutions, and an “online” phase, where the ROM is used for fast and energy-saving evaluation of many ROM solutions [20].

One of the first model order reduction (MOR) techniques developed to compute reduced spaces is the proper orthogonal decomposition (POD) method [23, 26]. It uses a set of FOM solutions to extract the most representative reduced space, which becomes the basis for the ROM. Then, in the online phase, the much smaller ROM can be used to run many simulations for different parameters/times using a small fraction of the energy required by the FOM. Examples of the application of POD can be found in [47] for optimal control of flows in water simulations, in [52] within a weighted method for stochastic problems, and in [51] for dispersive wave equations. The greedy algorithm is a technique that aims at reducing the energy consumption also in the “offline” phase [38, 39]. Indeed, it does not require the FOM solutions of the whole training set from which the reduced space is learned. Instead, it iteratively selects a new parameter by means of an error estimator, computes the FOM solution only for these very few parameters, and uses them directly to constitute the reduced space. The resulting method reduces the energy consumption also in the offline phase, though slightly worsening the accuracy of the resulting reduced space. Examples include applications of the greedy algorithm to uncertainty quantification problems [50], to Navier–Stokes problems [1], and to the Euler equations [13].
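The offline core of POD can be summarized in a few lines: collect snapshots, compute their singular value decomposition, and retain the modes that capture a prescribed fraction of the snapshot energy. The snapshot data below are synthetic and only illustrate the procedure.

    import numpy as np

    # Snapshot matrix: each column is a FOM solution for a different parameter.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 500)
    S = np.column_stack([np.sin(np.pi * x * (1 + mu)) + 0.01 * rng.standard_normal(500)
                         for mu in np.linspace(0, 1, 40)])      # shape (500, 40)

    U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

    # Retain the smallest number of modes capturing 99.9% of the snapshot energy.
    energy = np.cumsum(sigma**2) / np.sum(sigma**2)
    r = int(np.searchsorted(energy, 0.999) + 1)
    basis = U[:, :r]                                            # reduced basis

    # Relative error of projecting the snapshots onto the reduced space.
    err = np.linalg.norm(S - basis @ (basis.T @ S)) / np.linalg.norm(S)
    print(f"{r} modes retained, relative projection error {err:.2e}")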

For more complicated problems, where these techniques do not achieve sufficiently accurate results, recent nonlinear tools can be used to capture the underlying reduced latent space. One of the many techniques that can be used to this end is the autoencoder neural network [17, 28, 42]. These networks are able to obtain very small reduced spaces even when the solutions cannot be well represented by a linear combination of basis functions. Once the reduced space has been found, the reduced order model can be obtained with different techniques.
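A minimal PyTorch autoencoder of this kind is sketched below: the encoder compresses each snapshot into a four-dimensional latent vector and the decoder reconstructs it. The layer sizes, the latent dimension and the synthetic training snapshots are illustrative assumptions.

    import torch
    from torch import nn

    class Autoencoder(nn.Module):
        def __init__(self, n_full=400, n_latent=4):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_full, 64), nn.ReLU(),
                                         nn.Linear(64, n_latent))
            self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                         nn.Linear(64, n_full))

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Synthetic snapshots standing in for FOM solutions.
    snapshots = torch.sin(torch.linspace(0, 3.14, 400) * torch.rand(200, 1))

    model = Autoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(500):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(snapshots), snapshots)
        loss.backward()
        optimizer.step()
    print("reconstruction MSE:", float(loss))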

In the case of linear problems with affine dependence on the parameters, a simple Galerkin projection onto the reduced space can guarantee very accurate results while consuming much less energy [7, 20, 46, 51]. In the presence of nonlinearities, further reduction techniques (hyperreduction) can be used to recover an efficient, affine-like formulation. Among these techniques, it is worth mentioning the empirical interpolation method [9, 20, 40, 51], the empirical quadrature method [33, 55] and Gappy POD [34, 53]. These techniques reduce the computation of the nonlinear terms, through the evaluation of only a few points in the domain, again saving energy. More recent techniques have been developed to solve these nonlinear problems in less intrusive ways. A broad class of neural networks has been tailored to solve such problems [17, 21, 27, 35, 42, 56], as well as the dynamic mode decomposition (DMD) [3, 14, 16, 19, 37]. The common denominator of all these techniques is the ability to strongly reduce the computational cost and the energy consumption in the online phase after a learning procedure in the offline one.
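As an example of the non-intrusive techniques mentioned above, the following is a minimal dynamic mode decomposition sketch: a best-fit linear operator is learned from pairs of consecutive snapshots and then used to advance the state at negligible cost. The travelling-wave data are synthetic placeholders for FOM time series.

    import numpy as np

    # Synthetic snapshots: two travelling waves sampled in time (columns).
    x = np.linspace(0, 10, 300)
    t = np.linspace(0, 4 * np.pi, 80)
    data = np.array([np.sin(x - ti) + 0.5 * np.cos(2 * x + ti) for ti in t]).T   # (300, 80)

    X, Y = data[:, :-1], data[:, 1:]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = 6                                          # truncation rank
    Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].T
    A_tilde = Ur.T @ Y @ Vr / sr                   # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vr / sr @ W                        # DMD modes

    # One-step prediction from the last snapshot, performed in the reduced space.
    b = np.linalg.lstsq(modes, data[:, -1].astype(complex), rcond=None)[0]
    x_next = (modes * eigvals) @ b
    print("prediction norm:", np.linalg.norm(x_next.real))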

Dimensionality Reduction

When one examines the main sources of the computational cost of a simulation, the dimensionality of the model parameters should not be overlooked. In fact, the cost of some computations may grow exponentially with the number of parameters in the system. Therefore, methods that estimate how important each model parameter is may drastically reduce the computational burden of the experiments.

One such method reduces the parameter space by unveiling the directions in the parameter space along which the model function exhibits the greatest variation. This is achieved by normalizing the inputs into a reference domain centered at the origin and then rotating the parameter space until a lower-dimensional structure is identified [15].
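A minimal sketch of this idea is given below: gradients of a placeholder model are sampled over a normalized parameter domain, and the eigendecomposition of their average outer product reveals the direction along which the output varies most. The model and its three parameters are illustrative assumptions, not taken from [15].

    import numpy as np

    rng = np.random.default_rng(0)

    def model(p):
        # Placeholder scalar model: essentially a function of 0.7*p1 + 0.3*p2.
        return np.sin(3.0 * (0.7 * p[0] + 0.3 * p[1])) + 0.01 * p[2]

    def gradient(p, h=1e-5):
        # Finite-difference gradient of the model.
        g = np.zeros_like(p)
        for i in range(len(p)):
            dp = np.zeros_like(p)
            dp[i] = h
            g[i] = (model(p + dp) - model(p - dp)) / (2 * h)
        return g

    # Sample gradients over the normalized parameter domain [-1, 1]^3.
    samples = rng.uniform(-1.0, 1.0, size=(200, 3))
    grads = np.array([gradient(p) for p in samples])

    # Eigendecomposition of the average outer product of the gradients.
    C = grads.T @ grads / len(grads)
    eigvals, eigvecs = np.linalg.eigh(C)
    print("eigenvalues (ascending):", eigvals)
    print("dominant direction:", eigvecs[:, -1])   # roughly aligned with (0.7, 0.3, 0)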

Sensitivity Analysis (SA) can also be used to identify the parameters that most influence the model results. However, SA methods can themselves be highly computationally intensive. Alternatively, for computational models that have some type of coupled structure, advanced techniques that adopt SA can be applied. Thus, in [32], the coupled structure of some multiscale models is exploited to perform SA on the less computationally intensive components, such that the results are applicable to the dimension reduction of the overall model.

Reduction of Memory Storage

One can go even further and reduce the computational load of the reduced simulation by improving the storage of the reduced model. In fact, reduced order models perform significantly better in terms of time; however, they can occupy a large amount of memory, which decreases their sustainability. Several approaches exist to address this issue, such as the one presented in [29], where the memory storage of a Convolutional Neural Network was reduced by 90%. This reduction was obtained by replacing a finite set of the network layers with a response surface, using dimensionality reduction techniques to operate on a low-dimensional space. The main idea of the approach is presented in Fig. 7.17.

Fig. 7.17
A flow diagram in two ways. 1. An image points to image recognition and object detection, then to the final output for final application in embedded systems. 2. An image points to the pre-model, reduction layer, input output mapping, and final output.

A reduced order approach for artificial neural networks (ANNs) applied to object recognition
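The sketch below illustrates the idea of Fig. 7.17 under strongly simplifying assumptions: the first convolutional block of a network plays the role of the pre-model, PCA acts as the reduction layer, and a small logistic regression provides the input-output mapping. Data, labels and dimensions are synthetic placeholders and do not reproduce the reduced networks of [30].

    import numpy as np
    import torch
    from torch import nn
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    # Pre-model: first convolutional block of a (here untrained) CNN.
    pre_model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
        nn.Flatten(),
    )

    images = torch.randn(120, 3, 64, 64)                     # placeholder image batch
    labels = np.random.default_rng(0).integers(0, 3, 120)    # placeholder class labels

    with torch.no_grad():
        features = pre_model(images).numpy()                 # (120, 8*16*16)

    reducer = PCA(n_components=10).fit(features)             # reduction layer
    clf = LogisticRegression(max_iter=1000).fit(reducer.transform(features), labels)

    # Online use: pre-model + reduction + small classifier replaces the full CNN.
    with torch.no_grad():
        new_feat = reducer.transform(pre_model(images[:5]).numpy())
    print("predicted classes:", clf.predict(new_feat))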

Conclusions

This chapter mainly focused on the role of computational methods in achieving sustainable products. The surging growth of computational resources in the last two decades makes it possible to simulate virtually any real system in the context of the digital twin. The digital twin, in particular, integrates data from various sources and processes these data accordingly. Moreover, utilizing data-driven smart asset solutions is a key to reducing operational costs.

In this sense, the chapter was divided into two parts. First, a number of industrial examples of the use of computational methods in modelling a process or system were introduced, covering the environment and pollution, life sciences, and product life cycle optimization. The second part focused on the implementation of novel machine learning and artificial intelligence techniques for model order reduction to predict the system solution.