
1 Introduction

If we observe, from a physical and global point of view, the contemporary evolution of human society on the surface of the Earth, the dominant feature is a very rapid urbanization, accompanied by an equally rapid urban sprawl. Between 2000 and 2030, the urban occupation of the land surface of the planet is expected to triple [70], which already causes serious difficulties for food self-sufficiency, exposure to natural hazards (particularly floods) and health (air pollution, Urban Heat Island) [29].

Cities already account for more than half the world’s population, and will soon account for two-thirds; they also concentrate many industrial and transport activities. They are therefore of great importance in current climate change, especially through their CO\(_{2}\) production (up to 70 % of total greenhouse gas emissions [74]).

Climatologists working within the IPCC (Intergovernmental Panel on Climate Change) run their numerical models at the planetary scale with a mesh of approximately 100 km per side. Even in the long term, it is difficult to imagine that the mesh size can drop below 20 km [41]; urban structures therefore do not appear. However, if we compute the total heat production of the urban areas on the East Coast of the United States on the one hand, and on the east coast of China on the other, and if this production is injected into a climate model, especially into the winter jet stream, we see that this contribution may explain the over-warming of the permafrost several thousands of miles away, in northern Canada and Siberia respectively [84].

In present climate models, cities are like phantoms: we do not see them, but we can measure their effects, sometimes very far from them. The last IPCC report notes that the many measures taken by thousands of cities (in the form of climate-energy policies) cannot be quantified [41]. To go further, it will be necessary to implement a multi-scale method in which calculations as fine as necessary at the urban level make it possible to correctly set up the climate models.

This chapter is divided into four parts. Section 2, a state of the art that is very concise but extends to the main areas involved, recalls the existing knowledge and difficulties. Section 3 indicates how, and in what order, a new framework can be built for urban physics. Sections 4 and 5 describe the principal characteristics of the shortwave and longwave models, respectively.

2 Urban Physics: A State of the Art

2.1 From Environmental Physics

By the late sixties, researchers had begun to apply the possibilities of numerical simulation to the study of energy, momentum and matter exchanges between soil and atmosphere, and at that time T.R. Oke applied these energy balances to a specific environment: the city [59]. He identified and sought to prioritize the physical phenomena contributing to the production of the Urban Heat Island (mainly, increased net radiation and absorption of heat by urban surfaces [66]).

We can therefore speak about urban physics, as a particular field of environmental physics [7, 52].

However, at that time, computing resources did not allow the simulation of complex geometric patterns, and measurements were limited in quality, time and space. Despite substantial advances in satellite imagery, ground-based measurements are still required, and are still limited to periods of a few months in some parts of the city (cost, difficulty).

Regarding the geometry, research is oriented in two directions: the study of the urban canyon (a simplified street), or the use of regular extruded shapes (grid plan). At this point, despite some very encouraging intermediate results, it is not possible to accurately quantify the exchanges at the urban scale, nor to propose substantial guidelines for urban planning.

2.2 From Urban Planning

Urban planning developed between the 1850s and 1950s, from G.E. Haussmann in Paris (bringing air and sun to all people) and I. Cerdà in Barcelona (the inventor of the word “urbanism”) to the figure of Le Corbusier (the Macià plan in Barcelona, Chandigarh, Brasilia ...), first out of hygienist concerns, then for urban comfort.

Very tall buildings, made possible by the invention of the elevator, have created extremely dense downtowns, introducing the economic problem of the depreciation of land that is always in the shade (Manhattan and Chicago, then the major Asian cities). Sunshine, solar rights and sky rights were the main tools enabling planners to think in 3D [7].

We can then really speak of an intermediate scale, the neighborhood and the city, which we call “meso”, between the “macro” scale of geographers and climatologists and the “micro” scale of architects.

In Barcelona, I. Cerdà rotated the grid pattern of the Eixample by 45\(^{\circ }\) with respect to the north-south axis, to better distribute sunlight on the facades. However, in the Mediterranean climate, the delicate point is cross ventilation, which can avoid air conditioning in summer. This problem can be treated qualitatively (introduction of light wells), but, as it should be studied at the level of each building, it is impossible to quantify globally and therefore to validate Cerdà’s insight. In addition, urban life has changed since then, and has become much noisier: if people have to close their windows at night for acoustic reasons, the picture changes completely. The emergence of these multi-physics and multi-scale problems leads urban planners into a dead end, and Environmental Physics, which is not adapted to such fine scales, cannot support them.

2.3 From Building Physics

In the second half of the twentieth century, architects and HVAC engineers met around building energy efficiency problems, in the wider context of bioclimatic architecture [61]. Indeed, engineers introduced a scale much finer than that of the building, that of equipment and devices (window frames, heat pumps ...), which we call “nano”, for which they use finite-element-like computational methods [46]. However, at the micro level, these methods give way to nodal ones and the geometry disappears, breaking the dialogue with architects.

For decades, simulations have been limited to the study of single buildings. The scaling up to the urban block or neighborhood, made necessary by the tightening of thermal regulations, is very difficult. Because of its overly specific methods, Building Physics has as much difficulty in reaching the macro scale through the city as Environmental Physics has in arriving at the micro scale.

2.4 From Smart City

The application of internet technology to more physical networks (smart grid), particularly urban ones (smart city), generally presupposes the simplification of geometry into topology. In doing so, many researchers in this field emphasize the urban “software” (distribution of electricity, water and transport) over its “hardware” (buildings and infrastructure). This leads to privileging active systems over passive ones, thus optimizing at the margins rather than on the main topics, and in the short term rather than the long term.

However, the smart city framework is the first one that focuses on the urban scale and reaches both the nano scale (equipment) and the macro scale (for example, the national grid). It also generalizes the idea of providing permanent sensors to the city.

3 Urban Physics: A New Framework

3.1 The City as an Interface

We must first justify the nomenclature we have proposed for the scales, which is quite different from that of environmental physics. Nano (equipment, the meter range and below), micro (buildings, 10 m), meso (town, hundreds of meters to kilometers) and macro (tens and hundreds of kilometers) correspond to different regulatory frameworks and different actors, but also bring very different time scales. Indeed, facilities have lifetimes of fifteen to twenty years, buildings of the order of a century, and cities, although their development is accelerating, often have millennial layouts.

In a multi-scale analysis, the city becomes an interface between the buildings and the land. Exchange parameters are, for example, the albedo and the surface temperatures. An essential element is to preserve the geometry; otherwise it is not possible to understand the urban structure and to act on it or on its elements. The great recent advances are geometric: construction by procedural methods, adaptive levels of detail ... [20]

Zooming in on the city, one sees the various buildings with their windows. The window is the primary interface between the outside and the inside of the building, and the only one concerned by daylight. Windows are transparent to visible light (between 400 and 700 nm), but not to thermal infrared: this is the greenhouse effect, which has very important consequences for the thermal behavior of the building. Special devices, such as glazed balconies or Trombe walls [22, 69], take advantage of this effect at the level of an interior. Windows are generally quite complex devices, with balconies, shutters, curtains ... that achieve a desired balance between solar gains, protection against excessive inputs, and reduction of thermal losses (insulation). The configuration of this interface may vary during the day (curtains) and over the year (mobile protections).

Such characteristics can be related to the multiscale study of composite materials [1, 44], which deals with domain decomposition methods and multi-scale, parallel simulations.

3.2 Multiband Aspects of the Radiation Interacting with the Cities

A black body is a perfect absorber and, consequently, a perfect emitter. In 1900, Max Planck postulated that the electromagnetic energy of a black body is emitted not continuously (as by vibrating oscillators), but in discrete portions, or quanta.

Planck’s law states that the spectral radiance, or radiance per unit wavelength interval, \(L_{\Omega \lambda }\), expressed in W m\(^{-3}\) sr\(^{-1}\), is given by:

$$\begin{aligned} L_{\Omega \lambda } \left( {\lambda ,T} \right) =\frac{2hc^{2}}{\lambda ^{5}}\frac{1}{e^{\frac{hc}{k\lambda T}}-1} \end{aligned}$$
(1)

In this relation, T is the temperature expressed in K and \(\lambda \) the wavelength expressed in m. This distribution can also be expressed in terms of frequency, but the transformation must be expressed in terms of energy [72]. Let us assume that the new function is \(L_{\Omega \nu }\), which depends on \(\nu \) and T. First, we write the equality:

$$\begin{aligned} L_{\Omega \lambda } \left( {\lambda ,T} \right) d\lambda =L_{\Omega \nu } \left( {\nu ,T} \right) d\nu \end{aligned}$$
(2)

As \(\nu = c/\lambda \), \(d\lambda / d\nu = -c/\nu ^{2}\). For the next step, we can remove the negative sign, which simply reflects that increasing wavelengths correspond to decreasing frequencies. Then

$$\begin{aligned} L_{\Omega \nu } \left( {\nu ,T} \right) =L_{\Omega \lambda } \left( {\lambda ,T} \right) \frac{d\lambda }{d\nu }=L_{\Omega \lambda } \left( {\lambda ,T} \right) \frac{c}{\nu ^{2}} \end{aligned}$$
(3)

And finally:

$$\begin{aligned} L_{\Omega \nu } \left( {\nu ,T} \right) =\frac{2h\nu ^{3}}{c^{2}}\frac{1}{e^{\frac{h\nu }{kT}}-1} \end{aligned}$$
(4)

which is obviously expressed in W m\(^{-2}\) sr\(^{-1}\) Hz\(^{-1}\).

Three fundamental physical constants are present in these formulas:

  • Planck’s constant: h = 6.62606957\(\,\times \,\)10\(^{-34}\) J s

  • Velocity of light: c = 299792458 m s\(^{-1}\)

  • Boltzmann constant: k = 1.3806488\(\,\times \,\)10\(^{-23}\) J K\(^{-1}\)

Wien’s displacement law states that \(\lambda _{max}T = 2.898 \times 10^{-3}\) m K.

We can check it on the curves of Fig. 1: for the Sun temperature \(T = 5780\) K, we obtain \(\lambda _{max} = 5.013 \times 10^{-7}\) m (\(\approx \)500 nm).
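These relations are easy to check numerically. Below is a short sketch (using NumPy; the constants and the temperature 5780 K are those quoted in the text) that locates the maximum of Eq. (1) on a fine wavelength grid and verifies both Wien’s law and the wavelength-to-frequency conversion of Eq. (3):

```python
import numpy as np

# Physical constants (SI units), as quoted in the text
h = 6.62606957e-34   # Planck constant, J s
c = 299792458.0      # speed of light, m s^-1
k = 1.3806488e-23    # Boltzmann constant, J K^-1

def planck_wavelength(lam, T):
    """Spectral radiance of Eq. (1), in W m^-3 sr^-1."""
    return (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (k * lam * T))

def planck_frequency(nu, T):
    """Spectral radiance of Eq. (4), in W m^-2 sr^-1 Hz^-1."""
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

T_sun = 5780.0
lam = np.linspace(100e-9, 3000e-9, 100000)
lam_max = lam[np.argmax(planck_wavelength(lam, T_sun))]

# Wien's displacement law: lambda_max * T = 2.898e-3 m K (~500 nm for the Sun)
assert abs(lam_max * T_sun - 2.898e-3) < 1e-5

# Consistency of the two forms, Eq. (3): L_nu = L_lambda * c / nu^2
lam0 = 500e-9
nu0 = c / lam0
rel = abs(planck_frequency(nu0, T_sun)
          - planck_wavelength(lam0, T_sun) * c / nu0**2) / planck_frequency(nu0, T_sun)
assert rel < 1e-9
```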

Fig. 1
figure 1

Spectral radiance in W m\(^{-3}\) sr\(^{-1}\) as a function of the wavelength in nanometers

The same data can also be expressed in terms of the frequency in hertz (s\(^{-1}\)), as shown in Fig. 2. According to the interpretation of (2), the maximum values shown in Fig. 1 cannot be transposed directly to Fig. 2.

Fig. 2
figure 2

Spectral radiance in W m\(^{-2}\) sr\(^{-1}\) Hz\(^{-1}\) as a function of the frequency in hertz

If we plot the same results for the two temperatures of the Sun and the Earth, we obtain two curves with very different scales, which requires multiplying the second one by some factor (10\(^{6}\) in the situation of Fig. 3).

Fig. 3
figure 3

Spectral radiance in W m\(^{-3}\) sr\(^{-1}\) as a function of the wavelength in nanometers, for the Sun and Earth radiations

If we replace the linear abscissa by a decimal logarithmic one, the result is more compact and more understandable (Fig. 4).

Fig. 4
figure 4

Spectral radiance in W m\(^{-3}\) sr\(^{-1}\) as a function of the decimal logarithm of the wavelength

As observed in Figs. 3 and 4, the intersection of the continuous and the dashed curves occurs at \(\approx \)4 \(\upmu \)m (log\(_{10}(4 \times 10^{-6}) = -5.3979\)). This corresponds to the intersection point when the amplification factor of the dashed curve is equal to 30,000.

As the black bodies spectra of \(\approx \)6000 K (Sun) and \(\approx \)300 K (Earth) are separated, it is possible to uncouple the corresponding radiations.

If one essentially accepts the assumption of diffuse reflection, each surface can be characterized by a single parameter: the reflection coefficient. If one is interested in solar gains, this coefficient should be an average over the complete solar spectrum (between 0.32 and 4 \(\upmu \)m), but its assessment is often restricted to the visible spectrum (from 0.4 to 0.7 \(\upmu \)m) and weighted by the sensitivity curve of the human eye in daylight vision. For color images, the latter coefficient is split into three values (corresponding to the sensitivities of the RGB cones). The optical properties of the scene surfaces are thus first simplified (perfectly diffuse reflection) and then adapted to a particular receptor, diurnal human vision. However, other receptors may be considered: plants (photosynthesis also relates to the 0.4–0.7 \(\upmu \)m band, but with a very different sensitivity curve, with maxima at the red and purple extremes and a minimum at the center of the band, since green is generally reflected by the plant), photovoltaic cells (whose sensitivity extends into the near infrared, beyond one micron) ...

In recent decades, important advances in the terrestrial and satellite measurement of solar radiation, such as hyperspectral remote sensing [36, 37, 63], presage greater spectral accuracy in weather data and, therefore, greater rigor in the optical characterization of urban surfaces.

3.3 Shortwave

Because shortwaves and long waves are almost perfectly separated, the distribution of shortwave radiation on the urban geometry does not depend on temperature. At the macro level, we only need meteorological data (solar paths and clouds) and orographic data (mountain masks), now available all over the world. On the meso scale, the geometric model must be detailed enough (roof slopes, facades with their windows and balconies), but it carries a reduced semantics of reflection coefficients. At the micro level, we pass through the windows, and the physical and geometric properties are maintained. On the nano scale, we can define the sensors (for example, computer monitors for the study of glare in natural light).

The simplicity of shortwave treatments makes them the necessary starting point for building a framework for multiscale analysis [5]. Shortwave analysis has the twofold advantage of the large experience acquired in the field of entertainment and of recovering principles already known to planners, with full-scale “test cases” available. We are now ready to perform shape optimization in this framework.

At the city level, or at least at the district level, the first steps consist in establishing the possible objective functions, the constraints and the design parameters [10]. The main difficulty is that the general problem is typically formulated in terms of discrete variables and that sensitivity analysis is not reachable, because the involved functions are mostly not differentiable. Evolutionary algorithms are good candidates for this kind of optimization and have proven their effectiveness [43, 78].

Some improvements have been achieved with respect to the boundary and initial conditions in order to solve a problem closely related to the direct solar irradiation [79].

3.4 Long Waves

The next step is to move to long waves. One can imagine a city under a static atmosphere (acting only as a filter for the radiation), without inhabitants. Thermography has brought the infrared into our visual experience, and it seems that we are now able to refine urban planning criteria through a better consideration of long waves (urban climate, urban comfort) [30]. The geometrical parameters calculated for the shortwave are also used for long waves (view factors), but with the consideration of temperatures, we should now at least study the radiative-conductive coupling [8]. The transition from the current nodal methods to finite element ones [47, 62] has not yet been achieved, but multi-scale analysis could provide an important argument in this direction (the same method at all scales).

The Stefan-Boltzmann law states that the total energy (also known as irradiance or emissive power) radiated per unit surface of a black body per unit time is proportional to the fourth power of the black body thermodynamic temperature.

The total radiant energy emitted by a unit area of a blackbody radiator is obtained by integrating (1) over all wavelengths. The result is the Stefan-Boltzmann law:

$$\begin{aligned} Q=\sigma T^{4} \end{aligned}$$
(5)

In this expression, Q is measured in W m\(^{-2}\), T is the Kelvin temperature, and \(\sigma \) is the Stefan-Boltzmann constant (5.670373 \(\times 10^{-8}\) W m\(^{-2}\) K\(^{-4})\). It is linked to the universal constants by the relation:

$$\begin{aligned} \sigma =\frac{2\pi ^{5}k^{4}}{15h^{3}c^{2}} \end{aligned}$$
(6)

Expressed as a function of the temperature difference \(\varDelta T = T_{r}-T_{i}\), with reference temperature \(T_{i}\), this law can be approximated by a linear relation [8, 51]: between 10 and 30 \({}^{\circ }\)C, the approximate solution built around 20 \({}^{\circ }\)C gives a heat flux with an error of about 5 %. For instance, between two infinite walls, the first one at 293 K, the flux increment for each degree more or less on the other wall reaches \(Q_{293} \approx 6\) W m\(^{-2}\) K\(^{-1}\) (Fig. 5).
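The linearization can be sketched as follows (a toy check, not the chapter’s code; the reference temperature 293 K and the interval width follow the text, while the exact error bounds depend on the interval chosen):

```python
sigma = 5.670373e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_flux(T_r, T_i):
    """Exact net flux between two infinite black walls, Eq. (5) applied twice."""
    return sigma * (T_r**4 - T_i**4)

def linear_flux(T_r, T_i, T_ref=293.0):
    """First-order expansion around T_ref: Q ~ 4 sigma T_ref^3 (T_r - T_i)."""
    return 4.0 * sigma * T_ref**3 * (T_r - T_i)

# Flux increment per degree around 293 K: about 5.7, rounded to 6 in the text
h_r = 4.0 * sigma * 293.0**3
assert 5.5 < h_r < 6.0

# The relative error stays on the order of 10 % at the edges of a +/- 20 K interval
for dT in (-20.0, -10.0, 10.0, 20.0):
    exact = net_flux(293.0 + dT, 293.0)
    assert abs(linear_flux(293.0 + dT, 293.0) - exact) / abs(exact) < 0.11
```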

Fig. 5
figure 5

Heat flux from a wall at 293 K to a wall at the abscissa temperature. Max and min error: 10.3 and \(-9.7\) %

Between a source at 279 K and a receptor at 0 K, the flux is equal to 343.6 W m\(^{-2}\). Note that the average Sun irradiance [7] at the top of the atmosphere (or on the ground in the absence of atmosphere) is 342 W m\(^{-2}\). Thus 279 K would be the Earth’s temperature whose longwave radiation balances the Sun’s radiation in the absence of atmosphere.
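Both figures quoted above follow directly from Eq. (5); a quick check with the values of the text:

```python
sigma = 5.670373e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# Flux from a black source at 279 K towards a receptor at 0 K, Eq. (5)
Q = sigma * 279.0**4
assert abs(Q - 343.6) < 0.5

# Conversely, the black-body temperature balancing the mean solar
# irradiance of 342 W m^-2 is close to 279 K
T_eq = (342.0 / sigma) ** 0.25
assert abs(T_eq - 279.0) < 1.0
```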

4 Computational Model

The solution of radiative exchange problems is based either on ray tracing methods [39, 82] and their many variants, or on radiosity methods. The former are widely used in rendering, while the latter were initially introduced in heat transfer problems [34].

Radiosity methods have the advantage of addressing the problem of radiative exchange for the entire scene. They proceed in two steps:

  1. Calculation of the view factors

  2. Solution of the radiosity equations

There is a clear separation between the pure geometrical step and the radiative calculations. The positive consequence is that the setting of the radiative problem is completely independent and can be modified retrospectively and inexpensively.

4.1 The Simplest Model

Given a very large urban 3D model, consisting of tens or hundreds of thousands of facets, we want to mesh most of these faces and to perform calculations on the created meshes. The simplest calculations concern solid angles and view factors, or ultimately direct sunlight hours [9]. The most efficient solutions use projections, mainly the stereographic one [13].

Let us, for instance, calculate on the points of a virtual surface (a section of the city) the solid angle corresponding to the sky: the SSA (Sky Solid Angle) [25]. The sky contributes from all directions above the horizon that are not hidden by elements of the scene, and all these directions have equal weight.

If the same calculation is performed on real, and therefore opaque, surfaces (street ground, facades, roofs), the geometric quantity that has a physical sense is the sky view factor (SVF) [60], which takes into account the fact that grazing directions contribute less than normal directions, and that only the directions whose scalar product with the surface normal is positive contribute (an actual surface is necessarily oriented). The SVF is directly related to diffuse, or Lambert, reflection [45].

Fig. 6
figure 6

Equal area disks defined on the spherical surface, so, with the same solid angles (SSA)

Generally, SSA and SVF are expressed in percent, but while the SSA is referred to the hemisphere of the sky, the SVF is referred to the disk resulting from the orthogonal projection of the hemisphere on the plane containing the studied point. This is known as the Nusselt analogy [58]. Thus, in some configurations, the SVF can be higher than the SSA. For instance, the SVF of a spherical cap of opening \(\alpha \) located at the top of the hemisphere is always greater than its SSA. Indeed, the SSA is equal to \(2\pi (1-\cos \alpha )/2\pi = (1- \cos \alpha )\), while the SVF is equal to \(\pi \sin ^{2}\alpha / \pi = \sin ^{2}\alpha \). Their ratio is then equal to \(\sin ^{2}\alpha / (1 - \cos \alpha ) = (1-\cos ^{2}\alpha ) / (1-\cos \alpha ) = (1+\cos \alpha )\), varying from 2 for a very small cap to 1 for the full hemisphere. When \(\alpha \) is small, the cap SSA tends to \(\alpha ^{2}/2\) while its SVF tends to \(\alpha ^{2}\); thus (SVF/SSA)\(_{\alpha \rightarrow 0} = 2\).
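These closed-form expressions are easy to verify numerically; the following sketch checks the ratio SVF/SSA \(= 1+\cos \alpha \) for a few cap openings:

```python
import math

def cap_ssa(alpha):
    """SSA of a spherical cap of opening alpha: cap area / hemisphere area."""
    return 1.0 - math.cos(alpha)

def cap_svf(alpha):
    """SVF of the same cap (Nusselt analogy): projected area / base disk area."""
    return math.sin(alpha) ** 2

# Ratio SVF/SSA = 1 + cos(alpha): close to 2 for a tiny cap, 1 for the hemisphere
for alpha in (1e-3, 0.5, 1.0, math.pi / 2):
    assert abs(cap_svf(alpha) / cap_ssa(alpha) - (1.0 + math.cos(alpha))) < 1e-8

assert abs(cap_svf(math.pi / 2) - 1.0) < 1e-12   # full hemisphere: SVF = 1
```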

SSA and SVF depend only on the geometry of the scene and on the concept of horizontality (which defines the sky vault). The sunshine hours add the notions of cardinal points (north direction), latitude and period of the year, to set the solar paths. Most often, the sunshine hours are calculated on the extreme days of the solstices and the average days of the equinoxes.

Figures 6 and 7 show a set of disks located on two orthogonal meridians of a hemisphere. All the disks have the same area and thus the same SSA. The top disk of the hemisphere is seen at true scale in the center of Fig. 7. For the SSA, its cap area (very close to the disk area if it is small enough) is referred to the hemisphere area, while, for the SVF, the corresponding disk area is referred to the base disk area, validating the results presented above.

Fig. 7
figure 7

SVF of the 17 equal area or equal solid angle spherical caps

Cities rarely have clear boundaries. More or less remote mountainous areas can greatly reduce the visibility of the sky, and thus the availability of the sun. With the exception of sea harbors or extremely flat regions, the horizon is rarely visible from the city, and only from the highest buildings.

In practice, we only manage a portion of the urban model, and the division is more or less arbitrary (e.g., following administrative boundaries). Everything else can be projected onto a cylinder centered on the studied area, in order to obtain a correct skyline. If the studied area moves within the model, it is necessary to check that the encompassing cylinder remains accurate (parallax problem) [27].

Looking at the scene so organized from different points of the mesh, very different results are obtained. From the ground and the bottom of the facades, we often see very few objects, but close ones, while from the top of the facades one can have a panoramic view of nearly half of the model; numerous surfaces then appear with small view factors. From the roofs, the perceived scene is very different depending on whether the roof is tilted or horizontal: in the latter case, only the upper portions of buildings higher than the considered one are visible.

It is therefore natural to introduce techniques of adaptive detail in order to fully take into account these changes in the perceived scene, which can be very sudden, even when browsing the mesh of a single flat surface, because the scene consists of discrete elements with well-defined edges.

The simplest city models are extrusions of urban maps. Because the radiation incident on a facade is divided into two parts (one reaching the wall, the other entering through the windows), it is apportioned according to the glazing ratio. This simplification fits well with the philosophy of the nodal methods, which tend to simplify the geometry to the maximum (e.g., a building is represented by only two nodes, one for the envelope and one for the inside).

Fig. 8
figure 8

Scheme of the modelling of individual buildings

To improve the model [12], we can consider the windows as additions in front of the facades (Fig. 8). The advantage is that this avoids additional Delaunay mesh generation and keeps the model very simple. Thus, in Fig. 9, all the large areas are correctly oriented (including the roofs) and the windows are present, with a final model that remains below 20,000 triangles.

It has been shown that it is impossible to take the thickness of the walls into account afterwards. To do this, we must resort to procedural methods and adaptive levels of detail.

Fig. 9
figure 9

Model of the Compiègne central district

Thus, the idea of the pinhole [31] becomes very interesting, because it condenses the incoming information on the window itself, and allows optimizing the shape of the window on inner illumination criteria without restarting the computation at each step.

4.2 View Factors

In the solution of the radiosity equations, the heaviest part is the computation of the coefficients of the matrix constituting the system. Indeed, the number of coefficients is potentially very high (the square of the number of elements), and each one involves the treatment of visible surface detection:

$$\begin{aligned} F_{ij} =\frac{1}{A_i }\int \limits _{A_i } {\int \limits _{A_j } {\frac{\cos \theta _i \cos \theta _j }{\pi r^{2}}} } V(Y_i ,Y_j )dA_i dA_j \end{aligned}$$
(7)

The view factor (also called form factor, angle factor or configuration factor) is the basic ingredient of radiative heat transfer studies [20, 71]. It defines the fraction of the total power leaving patch \(A_{i}\) that is received by patch \(A_{j}\). Its definition is purely geometric: the angles \(\theta _{i}\) and \(\theta _{j}\) relate the direction of the vector connecting the differential elements to the vectors normal to these elements, and r is the distance between the differential elements.

Except in particular situations, it is not possible to compute the view factors explicitly [40]. An additional difficulty appears in the presence of obstructions, represented in the above expression by the visibility function \(V(Y_{i}, Y_{j})\). This function is equal to 0 or 1 according to the possible presence of an obstacle preventing an element \(Y_{i}\) from being seen from an element \(Y_{j}\).

It is much easier to compute the differential view factor by removing the external integration, which will be taken into account only in a second step to complete the evaluation of the view factor, using, for instance, the Gauss integration rule in the concerned patch. The differential view factor at a point surrounded by the element of area dS is given by:

$$\begin{aligned} F_{dS-A_j } =\int \limits _{A_j } {\frac{\cos \theta _i \hbox { }\cos \theta _j }{\pi r^{2}}V(Y_i ,Y_j )} \hbox { }dA_j \end{aligned}$$
(8)

If the visibility function is everywhere equal to 1, the integration of (8) performed over the full hemisphere gives a view factor equal to 1. Spherical projections combined with the Nusselt analogy provide an efficient solution to this problem [13].
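This normalization can be checked by numerical integration; the following Monte Carlo sketch (an illustration, not the projection method of [13]) samples the unobstructed hemisphere uniformly in solid angle:

```python
import math, random

# With V = 1 everywhere, Eq. (8) integrated over the hemisphere must give 1.
# Uniform sampling in solid angle: cos(theta) uniform in [0, 1].
random.seed(42)
n = 100000
acc = sum(random.random() / math.pi for _ in range(n))   # mean of cos(theta)/pi
estimate = acc * 2.0 * math.pi / n                       # hemisphere: 2 pi sr
assert abs(estimate - 1.0) < 0.02
```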

4.3 Radiosity Equations

In order to solve the interaction problem efficiently, it is usual to set up a discrete formulation derived from the global illumination equation by making the following assumption: the environment is a collection of a finite number N of small, diffusely reflecting patches, each with uniform radiosity [15, 71]. A didactic approach to the solution of the radiosity equations was presented in [6]; it includes the interpretation of importance, the dual of radiosity, obtained by the solution of the adjoint problem, and is able to initiate new developments in discretization error analysis.

Radiosity is the radiometric quantity that is best suited for quantifying the illumination in a diffuse scene. In practice, when there is a single problem to solve, iterative solutions are used, which require the treatment of only one line of the matrix per iteration (see Sect. 4.4).
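As an illustration of such an iterative solution, here is a minimal Gauss-Seidel sketch on a toy three-patch enclosure (the view factors, reflectances and exitances are invented for the example; only one row of the matrix is touched per update):

```python
import numpy as np

F = np.array([[0.0, 0.5, 0.5],          # view factors of a closed enclosure of
              [0.5, 0.0, 0.5],          # three equal patches (rows sum to 1)
              [0.5, 0.5, 0.0]])
rho = np.array([0.3, 0.5, 0.7])         # diffuse reflectances
E = np.array([100.0, 0.0, 0.0])         # only patch 0 emits

B = E.copy()
for _ in range(100):                    # Gauss-Seidel sweeps, one row at a time
    for i in range(3):
        B[i] = E[i] + rho[i] * F[i].dot(B)

# agrees with the direct solution of the system (I - R F) B = E
assert np.allclose(B, np.linalg.solve(np.eye(3) - np.diag(rho) @ F, E))
```

Convergence is guaranteed here because the spectral radius of RF is below the largest reflectance, which is smaller than 1.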

If the process is dynamic, for instance due to the movement of the Sun and the varying configurations of the sky [55], it is convenient to mesh the whole sky and to assign to each element of this mesh the emittance corresponding to the concerned situation.

In this situation, it is more efficient to use the technique of combination of unitary right members [18]: all the components of a column of right members are zero except one, which is equal to one. The system is solved for as many unitary right members as there are elements in the sky vault mesh, for instance 145 cells in the Tregenza dome [73], and more if necessary (Fig. 10).

The creation of this kind of dome is very simple, because it is based on the two classical geographical coordinates (latitude and longitude). This choice facilitates the positioning and the navigation in the mesh [14]. Its definition is given by the sequence of the numbers of elements in each ring. From these data, it is easy to compute the partition of a disk into equal area elements. The geometric transformation between a hemisphere and its equal area projection allows projecting the disk elements onto the sphere, either by using the equal area projection SSA (Fig. 10) or the equal view factor projection SVF (Fig. 11). The orthogonal projection of the dome of Fig. 10 is shown in Fig. 13, while the orthogonal projection of the dome of Fig. 11 corresponds to Fig. 12.
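The equal-area partition of the disk can be sketched in a few lines. The ring sequence below is a 145-cell Tregenza-like sequence, listed here from zenith to horizon as an assumption; any sequence of ring counts works the same way:

```python
import math

def ring_radii(cells_per_ring):
    """Outer radius of each ring of the unit disk so every cell has equal area."""
    total = sum(cells_per_ring)
    cum, radii = 0, []
    for n in cells_per_ring:
        cum += n
        radii.append(math.sqrt(cum / total))   # equal area: r^2 grows with count
    return radii

rings = [1, 6, 12, 18, 24, 24, 30, 30]         # 145 cells, zenith to horizon
radii = ring_radii(rings)
assert abs(radii[-1] - 1.0) < 1e-12            # last ring reaches the unit circle

# every cell of every ring has the same area, pi / 145
prev, cell_areas = 0.0, []
for r, n in zip(radii, rings):
    cell_areas.append(math.pi * (r * r - prev * prev) / n)
    prev = r
assert max(cell_areas) - min(cell_areas) < 1e-12
```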

Fig. 10
figure 10

Hemisphere composed of 289 equal area cells

Fig. 11
figure 11

Hemisphere composed of 289 equal view factor cells

Fig. 12
figure 12

Equal area cells in the base disk

Fig. 13
figure 13

Equatorial projection of the 289 equal solid angle cells

After solving the radiosity equations, it is sufficient to recombine the solutions for each particular situation for which the right member can be evaluated. Consequently, the computation of the radiosities is very cheap.
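The technique of unitary right members can be sketched as follows (toy data; the matrix M is built from random view factors normalized to satisfy closure). Solving M X = I once, any sky configuration is then obtained by a simple matrix-vector product:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                                   # toy number of patches / sky cells
F = rng.random((n, n))
np.fill_diagonal(F, 0.0)
F /= F.sum(axis=1, keepdims=True)        # enforce closure: each row sums to 1
R = np.diag(rng.uniform(0.2, 0.8, n))    # diffuse reflectances
M = np.eye(n) - R @ F                    # radiosity matrix of Eq. (12)

X = np.linalg.solve(M, np.eye(n))        # one solve per unitary right member
E = rng.random(n)                        # a particular sky emittance vector
assert np.allclose(X @ E, np.linalg.solve(M, E))   # cheap recombination
```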

Let us now define R, the diagonal matrix containing the hemispherical diffuse reflectances.

$$\begin{aligned} R_{ij} =\rho _i \delta _{ij} \end{aligned}$$
(9)

Let F denote the matrix of the view factor coefficients between patches i and j, as computed in (7):

$$\begin{aligned} F=\left( {{\begin{array}{llll} {F_{11} }&{} {F_{12} }&{} \cdots &{} {F_{1N} } \\ {F_{21} }&{} {F_{22} }&{} &{} \vdots \\ \vdots &{} &{} &{} \vdots \\ {F_{N1} }&{} \cdots &{} \cdots &{} {F_{NN} } \\ \end{array} }} \right) \end{aligned}$$
(10)

When the patches are planar polygons, the terms \(F_{ii}\) are equal to zero. These coefficients also verify the closure property when the complete environment (scene and sky) is taken into account:

$$\begin{aligned} \sum _{j=1}^N {F_{ij} } =1;\quad i=1,\ldots ,N \end{aligned}$$
(11)

In the next formula, the components \(B_{i}\) of vector B are the radiosities, or radiant fluxes per unit area, on patch i, while the components \(E_{i}\) of E are the radiant exitances. With \(G = RF\) and \(M = I - G\), the radiosity equations can be written:

$$\begin{aligned} (I-RF)B=\left( {I-G} \right) B=MB=E \end{aligned}$$
(12)

This discrete formulation leads to a linear system of equations for which many algorithms are available. The RF matrix, formed by the products of the view factors by the reflectances, is a non-symmetric matrix (except if all the reflectances and patch areas are equal), but the radiosity matrix M is diagonally dominant and well-conditioned.
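A minimal numerical sketch of system (12), on a hypothetical three-patch scene (view factors, reflectances and exitances invented for illustration; the row sums of F are below one, the remainder going to the sky):

```python
import numpy as np

# Hypothetical three-patch scene; the remainder of each row of F is
# the view factor towards the sky.
F = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.2],
              [0.4, 0.4, 0.0]])
rho = np.array([0.5, 0.5, 0.2])     # diffuse reflectances
E = np.array([100.0, 0.0, 50.0])    # exitances (W m^-2)

R = np.diag(rho)                    # Eq. (9)
M = np.eye(3) - R @ F               # radiosity matrix of Eq. (12)

# strict diagonal dominance: 1 > rho_i * sum_j F_ij on every row
assert (rho * F.sum(axis=1) < 1.0).all()

B = np.linalg.solve(M, E)           # radiosities (W m^-2)
```

Because M is diagonally dominant, any standard direct or iterative solver handles it without pivoting trouble.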

In order to integrate the radiosity method in the environment of finite element method [35], it is suitable to work with symmetric matrices [56, 57].

The equation structure allows introducing another important property of the radiative exchanges: the principle of reciprocity

$$\begin{aligned} \forall (i,j)\hbox { : }A_i F_{ij} =A_j F_{ji} \end{aligned}$$
(13)

We rewrite (12) explicitly and multiply each line i by \(A_{i}/\rho _{i}\):

$$\begin{aligned} \frac{A_i }{\rho _i }B_i -A_i \sum _{k=1}^n {B_k F_{ik} } =\frac{A_i }{\rho _i }E_i \end{aligned}$$
(14)

In pure diffuse reflection, this relation expresses the energy transfers between the N elements of the scene. If we use the reciprocity relation, we can transform (12) by multiplying the view factor matrix F by the diagonal matrix \(S_{ij}=A_{i}\delta _{ij}\) of the patch areas.

We then obtain a symmetric matrix with N (N+1)/2 independent elements.

$$\begin{aligned} SF=\left[ {{\begin{array}{llll} 0&{} {A_1 F_{12} }&{} {A_1 F_{13} }&{} \cdots \\ {A_2 F_{21} }&{} 0&{} {A_2 F_{23} }&{} \cdots \\ {A_3 F_{31} }&{} {A_3 F_{32} }&{} 0&{} \cdots \\ \vdots &{} \vdots &{} \vdots &{} \ddots \\ \end{array} }} \right] \end{aligned}$$
(15)

Then, multiplying (12) by \(SR^{-1}\), we can write:

$$\begin{aligned} \left( {SR^{-1}-SF} \right) B=SR^{-1}E\hbox { } \end{aligned}$$
(16)

And in symmetrical form:

$$\begin{aligned} S\left( {R^{-1}-F} \right) B=SR^{-1}E\hbox { }\rightarrow \hbox { }B=\left( {R^{-1}-F} \right) ^{-1}R^{-1}E \end{aligned}$$
(17)

The right-hand side \(SR^{-1}E\) represents the incident power on the patch [3]. Many very efficient methods are available to solve this system of linear equations. The Cholesky factorization [75] is very well known in the field of the finite element method; there is good feedback for problems with more than one million degrees of freedom, and for thousands of degrees of freedom it runs very well on PCs.
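The same kind of toy scene can illustrate the symmetric form (17) and its Cholesky solution; the areas are chosen so that reciprocity (13) holds, and all values are illustrative:

```python
import numpy as np

A = np.array([1.0, 1.0, 0.5])       # patch areas satisfying Eq. (13)
F = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.2],
              [0.4, 0.4, 0.0]])
rho = np.array([0.5, 0.5, 0.2])     # diffuse reflectances
E = np.array([100.0, 0.0, 50.0])    # exitances (W m^-2)

S = np.diag(A)
R_inv = np.diag(1.0 / rho)
K = S @ (R_inv - F)                 # symmetric matrix of Eq. (17)
assert np.allclose(K, K.T)          # reciprocity makes S F symmetric

L = np.linalg.cholesky(K)           # K = L L^T (K is positive definite)
rhs = S @ R_inv @ E                 # right-hand side of Eq. (16)
y = np.linalg.solve(L, rhs)         # forward substitution
B = np.linalg.solve(L.T, y)         # back substitution: radiosities
```

K is symmetric and strictly diagonally dominant with a positive diagonal (\(1/\rho_i > 1 \ge \sum_j F_{ij}\)), hence positive definite, so the Cholesky factorization always exists.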

In each line i of the matrices F or SF, the nonzero terms indicate which elements are visible from element i. So, we can build an incidence matrix L composed of integers, which gives the connections between all the elements of the scene. It helps to manage the system of equations and to identify possible ways to condense it.

Although the heaviest part of the computation time is the evaluation of matrix F, we can also try to accelerate the solution step by using iterative methods, as explained in the next section.

4.4 Neumann Series

Because the matrix G = RF, defined in (12), has a norm less than one, the matrix M is invertible and the Neumann series of successive powers of G converges to its inverse [20, 81, 85].

$$\begin{aligned} \hbox {If }\left\| G \right\| <1\hbox { then }M^{-1}=\left[ {I-G} \right] ^{-1}=\sum _{a=0}^\infty {G^{a}} \end{aligned}$$
(18)

This property gives indications to develop very efficient methods to solve these equations. It also gives justifications for iterative solutions. As noted by several authors [2, 26, 42], each step of the iterative process can be interpreted as the introduction of an additional reflection on all the elements of the scene.
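A sketch of this interpretation on hypothetical data: each term of the truncated Neumann series adds one order of reflection, and the partial sums converge to the direct solution.

```python
import numpy as np

F = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.2],
              [0.4, 0.4, 0.0]])
rho = np.array([0.5, 0.5, 0.2])
E = np.array([100.0, 0.0, 50.0])

G = np.diag(rho) @ F                # here ||G||_inf = 0.25 < 1
B = E.copy()                        # order 0: direct illumination only
bounce = E.copy()
for order in range(1, 20):
    bounce = G @ bounce             # one additional reflection
    B = B + bounce                  # partial sum of the Neumann series

B_direct = np.linalg.solve(np.eye(3) - G, E)
```

Truncating the loop after one or two iterations reproduces the one- or two-reflection stopping criterion discussed below.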

The ability to decompose the solution of the radiosity equation into orders of reflection is very interesting, because it allows comparing this method with ray tracing, where the order of reflections is a usual stopping criterion. Therefore, the calculation is often stopped at the second reflection. This is true in ray-tracing software such as Radiance [28], but it is also the case for a radiosity solver like the V-Ray software (http://www.chaosgroup.com), in which we can choose one or two reflections.

In a city, multiple reflections are possible, for instance between the facades of narrow streets. With an average reflectance of 20 %, the reflected energy does not exceed 20 % of the incident flux after the first reflection, 4 % after the second one and less than 1 % after the third one. However, this global reasoning can be misleading when local results are of interest, because the reflected energy may be the only energy available on certain surfaces, where it takes on a considerable importance.

In an inner space, the radiation from the Sun and the sky entering through the window largely illuminates the floor and part of the walls, but leaves the ceiling in full shade. The first reflection, on the ground, is the one that illuminates the ceiling. As the ceiling is generally light in color, it returns a second non-negligible reflection to the ground. This light is the first to reach the parts of the ground from which the sky is not visible. Two reflections are therefore needed to get a realistic rendering of an interior space in natural light.

But what happens in an outdoor scene? In an urban scene, because we can almost always see a bit of the sky, the second reflection does not represent a substantial change in the results, and the following ones can be ignored (except in very specific configurations, as for example the entrance of a tunnel).

Modern cities all share some essential characteristics: a network of streets delineates parcels built with heights ranging from a few meters to tens of meters. However, other features are highly variable. This is the case of the optical properties of the coatings. Facades can be dark (brick) or light (limed walls), with a glazing ratio (and thus a specular reflection) ranging from a few percent to almost 100 % (towers of glass and steel).

An important parameter of environmental physics is the albedo, an average reflection coefficient over a very large area. For instance, we can refer to the albedo of a planet (the Earth's albedo is about 30 % [7]). The albedo of sea ice, ocean, desert or forest is fairly easy to assess. Today, as cities cover large parts of the land area, it is necessary to know their albedo too. However, the semi-regular structure of cities yields a highly variable albedo: the proportion of light and dark surfaces depends on the building heights and on the density of the neighborhood.
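As a first-order illustration, an area-weighted mean reflectance gives a rough neighborhood albedo; the figures below are invented, and this estimate ignores precisely the cavity effect (multiple reflections between facades) that makes real urban albedos lower and highly variable:

```python
def area_weighted_albedo(surfaces):
    """First-order albedo estimate: area-weighted mean reflectance.
    Real urban albedos are lower, because multiple reflections between
    facades trap part of the radiation (cavity effect)."""
    total_area = sum(area for area, _ in surfaces)
    return sum(area * refl for area, refl in surfaces) / total_area

# invented neighborhood: (projected area in m^2, mean reflectance)
surfaces = [(4000.0, 0.15),   # asphalt streets
            (3000.0, 0.20),   # dark roofs
            (3000.0, 0.50)]   # light facades
albedo = area_weighted_albedo(surfaces)   # 0.27 for these figures
```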

Another characteristic of urban settings, due to the fact that cities are relatively low and very spread out, is that what can be seen from a given point is very variable. From a window on a ground floor, the view can be limited to only two surfaces: the street and the facing wall. From a window at the top of a tower, we can see dozens, even hundreds of buildings. Computing an urban geometry therefore strongly motivates playing on the buildings' level of detail.

Distant buildings can be replaced by their prismatic envelopes. This kind of procedure has long been used to accelerate the detection of visible surfaces. Several options are available, from bounding boxes [32] to prismatic envelopes and convex bounding polyhedra.

5 Coupling Short and Long Waves in Transient Situations

The shortwave radiative exchanges limited to diffuse reflections can be calculated on the basis of the radiosity alone. More complete treatments, including the transient heat transfer in solids and the possible inclusion of the atmosphere, require more sophisticated methods. Starting from the classical equation of heat conduction in a solid:

$$\begin{aligned} div\left[ {k\hbox { }grad\hbox { }T} \right] +Q=\gamma c_v \frac{\partial T}{\partial t} \end{aligned}$$
(19)

Q is the heat source density (W m\(^{-3}\)), T the temperature (K), k the thermal conductivity (W m\(^{-1}\) K\(^{-1}\)), t the time, \(c_{v}\) the specific heat (J kg\(^{-1}\) K\(^{-1}\)) and \(\gamma \) the density (kg m\(^{-3}\)). The variable conjugate to the temperature is the heat flux, linked to the temperature gradient by Fourier's law.

To discretize these equations, the usual technique is the nodal method [50], used for instance in Esarad [64]. This technique, also known as the “Lumped Parameter Method” [65, 68, 78], offers a number of advantages, among which we note that it highlights the thermal balances and heat fluxes. The fundamental assumption of the method is the use of isothermal nodes arranged in a network where they are connected by resistances and capacitances (electrical analogy). Its main drawback is that it requires an idealization step between the geometrical definition of the model (CAD step) and the definition of the calculation model. For small models, it can be very useful, because it gives a summary of the exchanges. However, it offers only a coarse representation of the temperature distribution.
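A minimal sketch of the electrical analogy, for a single wall node between two fixed temperatures; all resistances, capacities and temperatures are illustrative values, not a calibrated model:

```python
# One wall node between indoor and outdoor temperatures (R-C analogy).
# All numerical values are illustrative, not measured data.
C = 2.0e5                  # node heat capacity (J K^-1)
R_in, R_out = 0.05, 0.10   # thermal resistances (K W^-1)
T_in, T_out = 20.0, 0.0    # imposed boundary temperatures (deg C)

T = 10.0                   # initial node temperature
dt = 60.0                  # time step (s)
for _ in range(10_000):    # explicit march toward steady state
    flux = (T_in - T) / R_in + (T_out - T) / R_out   # net heat flow (W)
    T += dt * flux / C

# analytic steady state: resistance-weighted mean of the boundaries
T_steady = (T_in / R_in + T_out / R_out) / (1 / R_in + 1 / R_out)
```

With these values the node converges to \(T \approx 13.3\,^{\circ }\)C; a real nodal model is simply a large network of such balance equations.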

An alternative is the finite element method, originally developed in mechanical and civil engineering. In this method, the domain is covered with a congruent mesh. In each element, the field is replaced by a polynomial approximation respecting the required field continuity conditions through the interfaces (borders with neighboring elements). The border area consists of a boundary layer through which the exchanges occur with the fluid (here, the atmosphere). Radiative exchanges with the outside or with other elements of the scene also occur through this boundary.

In the thermal problem, the temperature field is discretized, so that the result of the simulation is a temperature map “painted” on the skin of the solid.

The boundary conditions consist of Dirichlet or essential conditions where the temperature T is imposed, natural or Neumann conditions where the heat flux is imposed, and Robin conditions, which are a weighted combination of Dirichlet and Neumann boundary conditions. These three zones cannot overlap and their union should be the total boundary.

The loads are of different natures:

  • Heat flow from the shortwave solar radiation, calculated separately in the “radiosity” module;

  • Long wave radiative fluxes travelling towards other elements of the scene or to the atmosphere, proportional to the difference of the fourth powers of the temperatures;

  • Convective flow proportional to the temperature difference between the surface of the solid and a reference point of the fluid in which it is immersed [30];

  • Any other heat flow that may be estimated directly or expressed in terms of temperatures, for example evapotranspiration [53].

In brief, the solid subjected to these heat flows, and whose temperature is known at least at one point, will evolve as a result of internal heat conduction and of the ability of the materials to store heat. In finite element analysis, this is a classical problem, whose theory was developed in the 1970s [33].

The time component is calculated by a finite difference method. At the discretization level, we must ensure that the time step is consistent with the spatial discretization.
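A sketch of such a time integration, using an unconditionally stable backward Euler scheme on a toy one-dimensional rod; the material values are illustrative and a lumped capacity matrix is assumed:

```python
import numpy as np

n, dx, dt = 5, 0.1, 10.0
k, rho_c = 1.0, 1.0e6            # conductivity, volumetric heat capacity

# assemble the 1D conduction matrix K (two-node stencil per segment)
K = np.zeros((n, n))
for i in range(n - 1):
    K[i, i] += k / dx
    K[i + 1, i + 1] += k / dx
    K[i, i + 1] -= k / dx
    K[i + 1, i] -= k / dx

C = np.eye(n) * rho_c * dx       # lumped capacity matrix
Q = np.zeros(n)
Q[0] = 100.0                     # heat input at one end (W)

T = np.full(n, 20.0)             # initial temperature field
A = C / dt + K                   # backward Euler system matrix
for _ in range(100):             # (C/dt + K) T_new = Q + (C/dt) T_old
    T = np.linalg.solve(A, Q + (C / dt) @ T)
```

The implicit scheme is stable for any time step, so the choice of dt is governed by accuracy and by consistency with the mesh size rather than by stability.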

In this problem, the main difficulty is the calculation of the view factors between the surfaces facing each other. It may be assumed that they have been calculated in the previous (shortwave) step. If they are to be used in both analyses, the two meshes must coincide.

To calculate the convective exchanges, one must know the temperature of the air, which requires in principle to include modeling of the fluid.

The calculation of thermal interactions in the city encompasses three major phases:

  • the definition of the geometry, which must be structured and has to allow processing of very large volumes of data,

  • the view factors calculation, which involves the effective detection of hidden or viewed parts,

  • and finally, the solution of the equations of transient heat conduction in the coupled conduction-radiation problem.

The methods proposed to solve these problems are well suited, but it is still necessary to verify that the computation time is acceptable.

The treatment of massive geometric data takes advantage of advances in procedural methods and in levels of detail (“LOD”) methods [20]. The calculation of view factors can take advantage of the progress made in Monte Carlo methods and in the effective treatment of hemispheric and stereographic projections.

For the solution of the coupled system, the choice of the finite element method is motivated by its ability to provide a temperature map that can be easily compared to telemetry results [76].

Today, the finite element method is widely used to solve nonlinear problems with millions of degrees of freedom and benefits from the attention of programmers who have optimized the algorithms. It may be accused of producing an enormous amount of results, but the task of identifying the relevant information is made easier by visualization techniques. The use of optimization techniques and sensitivity calculations is another decisive tool to assist in the understanding and interpretation of the analyzed phenomena.

5.1 Improving the Performances of the Finite Element Solution Using Super Elements

The set of transient equations is linear with respect to conduction and convection, but not to radiation. In clear sky conditions, due to the large temperature differences (about \(-50\,{}^{\circ }\)C in the zenith direction, up to \(50\,{}^{\circ }\)C or more on the ground), the heat exchanges between sky and city are highly nonlinear. However, for cloudy skies, the difference can drop drastically [80], leading to an apparent sky temperature very close to the ambient one. It is then very interesting to condense the linear part of the model into a super element and to iterate on the degrees of freedom corresponding to the elements of the city contributing most to the heat fluxes towards the sky, i.e. by selecting only the roofs or other exposed elements of the scene [8]. The super element technique is well known in the fields of structural mechanics and civil engineering [20]. It is extensively used in the modeling of large structures like full aircraft or oil platforms. For this problem, procedural methods will help to organize the data [19].
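The condensation itself is a Schur complement on the retained degrees of freedom. A small sketch, with a random symmetric positive definite matrix standing in for the linear part of the model (dimensions and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_i, n_b = 6, 2                     # interior (condensed) / retained DOFs
n = n_i + n_b

# random SPD matrix standing in for the linear part of the model
K = rng.random((n, n))
K = K + K.T + 16.0 * np.eye(n)      # diagonally dominant, hence SPD
f = rng.random(n)

K_ii, K_ib = K[:n_i, :n_i], K[:n_i, n_i:]
K_bi, K_bb = K[n_i:, :n_i], K[n_i:, n_i:]
f_i, f_b = f[:n_i], f[n_i:]

# super element: Schur complement on the retained (e.g. roof) DOFs
K_s = K_bb - K_bi @ np.linalg.solve(K_ii, K_ib)
f_s = f_b - K_bi @ np.linalg.solve(K_ii, f_i)
T_b = np.linalg.solve(K_s, f_s)     # solve the small condensed system

# recover the condensed interior DOFs and check against the full solve
T_i = np.linalg.solve(K_ii, f_i - K_ib @ T_b)
T_full = np.linalg.solve(K, f)
assert np.allclose(np.concatenate([T_i, T_b]), T_full)
```

In the nonlinear problem, the factorization of the interior block is done once, and only the small condensed system enters the iteration on the radiative degrees of freedom.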

5.2 Other Aspects of the City Behavior Simulation

Another challenge is to consider the convection, which works well with the nodal methods [5]. Developments have been performed dealing with this aspect or with the interaction of heat and fluid dynamics [21, 83]. The ventilation aspects could then be addressed in this framework, with a clear objective: to be able to transpose everywhere (in the squares, at the bottom of streets and inside buildings) the measurements made at specific points of the city (on some roofs, near the airport) [24].

In general, fluid-structure and fluid-solid interaction problems are still a major field of research and development within the multiscale and multiphysics disciplines [38, 77].

Solar radiation reaches the Earth’s surface after passing through the atmospheric layer. It appears in different forms: direct, diffuse, reflected by the environment or other elements of the scene. Other phenomena also contribute, for example, the physical and chemical reactions that take place in the atmosphere and the phenomena induced by vegetation [23].

Finally, we can follow the example of previous works concerned with large dimension problems [48], while taking advantage of the continuing improvements in computer memory and processors.

6 Conclusion

A complete model of multi-scale energy exchanges should allow the following simulations: investigating major urban design options with their impact on the overall urban energy efficiency (which, among other things, allows a better evaluation of smart city type proposals); helping urban planning (optimization of urban forms based on energy conservation criteria); supplying atmospheric models at the macro scale.

Further developments should deal with the following steps:

  1. A “shortwave” model for achieving fast simulations on very large geometric models;

  2. A “long wave” model to simulate thermography;

  3. A “multiphysics” approach for finding optimal solutions adapted to the urban project;

  4. A complete physical model coupling thermodynamics and aeraulics.

On the basis of an urban geometry model built on procedural principles with an adaptive level of detail, the shortwave model must be able to simulate the distribution of solar radiation from meteorological data, with time steps of a few minutes and taking reflections into account, in order to calculate the main thermal parameters (solar gains) and luminous ones (Daylight Autonomy DA [67] and Useful Daylight Illuminance UDI [54]). The calculation time is an important issue for the three main anticipated applications: calculation of the urban albedo variation, urban design assistance and optimization of urban shapes (on criteria such as solar energy potential or photovoltaic solar access).

For the long wave model, the calculation of surface temperatures can be achieved at a reasonable cost if we agree to simplify the convection contribution. Thermography is now part of our visual experience, and architects wish to use it soon as a simulation component in their projects. The many existing thermographic images, including at the urban scale (above-ground thermography performed using satellites, aircraft or drones), together constitute test cases available to calibrate the simulations. This objective requires moving from nodal methods to finite element methods.

Multiphysics studies should then focus on heat, light, acoustics and cross ventilation. For optimizing urban shapes, cross ventilation is indeed much more readily available (and therefore more efficient) than forced ventilation (wind). Therefore, these studies do not necessarily need very sophisticated simulations, but rather a methodical order that is still to be explored.

Finally, the complete physical model should be able to take into account the heat-fluid coupling, with, as a primary objective, the transposition anywhere in the city of the data collected by weather stations. To achieve the quantification of the different contributions to urban climate, multiscale analysis and model reduction techniques will undoubtedly be necessary.

To optimize a city on criteria of comfort or energy efficiency, a prior understanding of the urban climate and a precise quantification of the different contributions to this climate are clearly necessary. This is even truer in order to act on air quality at the urban scale. Pioneering works have shown, first, that the relevant choice of a LoD (Level of Detail) for the urban 3D model is essential [49] and, secondly, that any major action on the climate must be preceded by an analysis assessing not only the effectiveness of the different possible actions, but also the order in which they should be carried out; otherwise unexpected, and possibly dangerous, results may be obtained. Cities are systems, and a systemic approach is mandatory.

We believe that achieving the first three objectives is needed before we can work on the last one seriously. Another necessary condition is that the FEM community is interested in this subject. Raising this interest has been the main motivation of this chapter.