1 Introduction

In the IR imaging simulation of a complex scene, the conditions of the targets, background and atmosphere are all involved. The infrared imaging sensor forms an infrared radiation image by receiving the infrared radiation signals from the scene. Both the radiation emitted by the atmosphere and the atmospheric attenuation along the transmission path affect this image [1]. To build an imaging model of a complex scene, we must consider not only the physical and radiative characteristics of the targets and background, but also the characteristics of the environment, the atmosphere and other objects in the scene [2, 3]. This paper presents a new external rendering algorithm for IR imaging modeling of complex infrared scenes. We describe the implementation of the external rendering in detail, together with the various ways of establishing an infrared scene using the proposed algorithm.

2 The Proposed External Rendering Algorithm

The external rendering algorithm proposed in this paper is implemented on top of an infrared environment model, together with sensor objects and emitter/reflector objects. The sensor object converts the received infrared signal into a displayable image signal; it needs the information about the targets and background provided by the infrared environment model. The target objects represent the concrete instances of the targets in the simulation system, such as fighters, ground vehicles, missiles and clutter. The infrared environment model responds to the requests sent by the sensor objects, manages and computes the signals sent by the target objects, and ultimately provides accurate information to the sensor objects. It is in charge of the generation, transmission and collection of the infrared signals emitted by the targets and background.

When the infrared environment model receives a request message from a sensor, it first determines which objects are in the sensor’s field of view by calling the spatial service interface. It then sends request messages to all the objects in the field of view to obtain their infrared signals. After receiving the infrared signals from all the objects, the infrared environment model synthesizes the signals in a suitable way and applies the atmospheric effects by calling the atmosphere service interface. Finally, it returns the image information to the sensor. The communication flow and data flow of the whole infrared simulation system are shown in Fig. 1.
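
To make this request–response flow concrete, the following is a minimal Python sketch of the environment model’s handling loop. All names (EnvironmentModel, objects_in_fov, render_feature_images and the two service interfaces) are hypothetical illustrations of the interfaces described above, not an API defined by this paper.

```python
class EnvironmentModel:
    """Responds to sensor requests by collecting and synthesizing signals."""

    def __init__(self, spatial_service, atmosphere_service, objects):
        self.spatial = spatial_service        # answers field-of-view queries
        self.atmosphere = atmosphere_service  # transmission / path radiance
        self.objects = objects                # all emitter/reflector objects

    def handle_sensor_request(self, sensor):
        # 1. Determine which objects lie in the sensor's field of view
        #    by calling the spatial service interface.
        visible = self.spatial.objects_in_fov(sensor, self.objects)
        # 2. Request the infrared signals (feature images) of those objects.
        signals = [obj.render_feature_images(sensor) for obj in visible]
        # 3. Synthesize the signals and apply the atmospheric effects
        #    (via the atmosphere service interface); details in Sect. 2.2.
        image = self.synthesize_with_atmosphere(signals)
        # 4. Return the image information to the sensor.
        return image

    def synthesize_with_atmosphere(self, signals):
        # Placeholder for the synthesis of Sect. 2.2 (per-pixel depth sort,
        # Eq. (6) compositing, Eq. (7) atmospheric correction).
        raise NotImplementedError
```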

Fig. 1. Data communication flow chart of the proposed infrared simulation algorithm

2.1 The Segmentation and Aggregation of Targets

For the imaging of a complex infrared scene, the main deficiency of external rendering lies in modeling the optical interactions between targets [4, 5]. Only with a reasonable segmentation and aggregation of the targets in the scene can we create a synthetic infrared scene with sufficient fidelity to meet practical requirements [6]. There are many ways to segment and aggregate the targets for external rendering; we enumerate and analyze four main ones below.

1. Every Object a Separate Model

In this method, we regard each emitter or reflector in the scene as an independent model, treating each target as an independent emitter/reflector. This method has the lowest inherent fidelity, yet it satisfies the requirements of most IR imaging simulation applications. However, it cannot model the shadows cast on the ground or on targets by other targets.

2. Targets as Separate Models and Other Objects as a Single Model

In this method, we divide the objects into two classes: target objects and non-target objects. Non-target objects are any objects other than the targets, such as interference, clouds and background. We regard each target object as a separate model, and aggregate all the other objects into a single model.

3. EO Critically Coupled Objects as a Single Model

In this method, we aggregate the optically critically coupled scene objects into a single model. For spatially separate objects, their optical interactions must be modeled correctly; when such objects are aggregated into a single model, the model must include these optical interactions, because they affect the total radiated energy.

4. Entire Environment as a Single Model

In this method, we regard the entire environment, including all the targets in the scene, as a single model. This method suits the strictest fidelity requirements, since it enables us to take any necessary thermal and optical interaction into account when modeling the entire scene.
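
The four strategies above amount to a configuration choice for the scene builder. A minimal sketch follows; the enum name and value labels are our own, not terms fixed by this paper.

```python
from enum import Enum, auto

class AggregationStrategy(Enum):
    """The four segmentation/aggregation strategies of Sect. 2.1."""
    EVERY_OBJECT_SEPARATE = auto()  # 1: lowest fidelity, no inter-target shadows
    TARGETS_SEPARATE = auto()       # 2: targets separate, non-targets as one model
    COUPLED_AS_SINGLE = auto()      # 3: optically coupled objects merged
    ENTIRE_ENVIRONMENT = auto()     # 4: whole scene as one model, highest fidelity
```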

Fig. 2 summarizes the generation mechanism of the infrared images and shows which kinds of infrared signals can be aggregated by each of the four methods above. Even in the worst case, most of the light sources can be modeled.

Fig. 2. The energy generation mechanism of the external rendering algorithm

2.2 The Synthesis of IR Images

Targets are always embedded in some background, so the infrared radiation images of the targets and the background must be synthesized together [7]; only then can we analyze the infrared radiation contrast between targets and background [8]. The main characteristic of the proposed external rendering is that it synthesizes the feature images of many objects correctly while taking the atmospheric transmission of infrared radiation into consideration. The information involved in performing this function is shown in Fig. 3.

Fig. 3. The schematic diagram of the rendering algorithm

After the emitters and reflectors in the sensor’s field of view have been rendered, the infrared environment model requests their feature images. Each emitter/reflector transmits four rendered images (two-dimensional matrices) to the infrared environment model: a radiance image, a transparency image, a truth image and a range image.

The radiance image is a pixel array whose unit is a photon flux rate of radiation. The transparency image describes how the given emitter or reflector occludes the sensor’s field of view; it is mainly used to account for occlusion between targets. The truth image is a coded image similar to the transparency image: it encodes the types of the target objects found in the sensor’s field of view, so that every object in the imaging simulation system can be distinguished. The range image is an array of distance values between the sensor object and the emitter/reflector; the infrared environment model uses these values to apply the atmospheric effect to each pixel [9]. All the images are stored as two-dimensional arrays whose sizes are determined by the sensor’s sample counts in the azimuth and elevation directions.
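
A natural container for these four images is a small record type. The sketch below is one possible layout, assuming NumPy arrays; the field names are our own (the range image is stored as distance to avoid shadowing Python’s built-in range), and only the content is fixed by the description above.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FeatureImages:
    """The four feature images one emitter/reflector returns (Sect. 2.2)."""
    radiance: np.ndarray      # radiance image: photon flux rate per pixel
    transparency: np.ndarray  # transparency image: occlusion between targets
    truth: np.ndarray         # truth image: integer code of the object type
    distance: np.ndarray      # range image: sensor-to-object distance per pixel

    def __post_init__(self):
        # All four images share the sensor's azimuth x elevation sampling grid.
        assert self.radiance.shape == self.transparency.shape \
            == self.truth.shape == self.distance.shape
```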

The IR image synthesis process finally outputs one radiance image, one transparency image, one truth image and one range image, which together contain all the objects in the sensor’s field of view. The specific calculation proceeds in the four steps below; a consolidated code sketch follows them.

1. Define the radiance images, transparency images, truth images and range images returned by the emitters/reflectors as \( \{S_i\} \supset \{R_i, F_i, D_i, T_i\},\ 1 \le i \le n \). Then determine the minimum distance value of every range image, \( R_{M(1)}, R_{M(2)}, \ldots, R_{M(i)}, \ldots, R_{M(n)} \):

$$ R_{M(i)} = \min_{1 \le k \le x,\; 1 \le l \le y} R_i(k,l) \quad (1 \le i \le n) $$
(1)

Here \( k \) and \( l \) denote the pixel at row \( k \), column \( l \), and \( R_i(k,l) \) is the distance value at position \( (k,l) \) of the \( i \)th range image. Sort the minimum distance values in descending order, \( R_{M(L1)} \ge R_{M(L2)} \ge \cdots \ge R_{M(Li)} \ge \cdots \ge R_{M(Ln)} \), where \( 1 \le Li \le n \) \( (1 \le i \le n) \). The range, radiance, transparency and truth images corresponding to the minimum distance values are reordered accordingly: \( \{S_{Li}\} \supset \{R_{Li}, F_{Li}, D_{Li}, T_{Li}\},\ 1 \le Li \le n \) \( (1 \le i \le n) \).

2. Take a pixel \( P(k,l) \) of the rendered images as an example, where \( k \) and \( l \) denote the position of the pixel in the image. The distance, radiance, transparency and truth values at this position of each rendered image are \( \{R_{P(i)}, F_{P(i)}, D_{P(i)}, T_{P(i)}\},\ 1 \le i \le n \), defined as follows:

$$ R_{P(i)} = R_{Li}(k,l) \quad (1 \le i \le n) $$
(2)
$$ F_{P(i)} = F_{Li}(k,l) \quad (1 \le i \le n) $$
(3)
$$ D_{P(i)} = D_{Li}(k,l) \quad (1 \le i \le n) $$
(4)
$$ T_{P(i)} = T_{Li}(k,l) \quad (1 \le i \le n) $$
(5)

Then we sort the distance values at the given position into descending order with the bubble sort algorithm: \( R_{P(L1)} \ge R_{P(L2)} \ge \cdots \ge R_{P(Li)} \ge \cdots \ge R_{P(Ln)} \), where \( 1 \le Li \le n \) \( (1 \le i \le n) \). The radiance, transparency and truth values are reordered correspondingly, \( \{S_{P(Li)}\} \supset \{R_{P(Li)}, F_{P(Li)}, D_{P(Li)}, T_{P(Li)}\},\ 1 \le i \le n \). All the pixels are traversed in this manner so that the radiance, distance, transparency and truth values of each pixel are reordered.

3. Once the final per-pixel order is determined, the radiance value of each pixel of the final synthetic image can be calculated. The calculation at any position starts from the top-level (farthest) pixel, propagates the synthesized radiance to the next pixel in turn, and ends at the bottom-level (nearest) pixel. Concretely, we first compute the synthetic radiance of the far pixel and the near pixel at the given position:

$$ Rad_{syn} = \left( Rad_{far} \times Tran + Rad_{path} \right) \times D_{near} + Rad_{near} $$
(6)

Here \( Rad_{far} \) is the radiance value of the far pixel and \( Rad_{near} \) that of the near pixel; \( Tran \) and \( Rad_{path} \) are the atmospheric transmission ratio and the path radiance between the two pixels; \( D_{near} \) is the transparency value of the near pixel; and \( Rad_{syn} \) is the synthetic radiance of the two pixels. The result \( Rad_{syn} \) then replaces the radiance of the near pixel, which acts as the far pixel in the next synthesis step, namely the synthesis with the next nearer pixel. This process repeats until the bottom-level pixel is reached. Finally, the distance, transparency and truth values of the synthetic pixel are set to those of the bottom-level pixel.

4. Once the final synthetic image of the emitters/reflectors is obtained, the atmospheric effects are added to each pixel of the synthetic image as follows:

$$ Rad_{after} = Rad_{syn} \times Tran' + Rad_{path}' + D_{syn} \times \left( Rad_{sky}' - Rad_{path}' \right) $$
(7)

Here \( Rad_{syn} \) and \( D_{syn} \) are the radiance and transparency values of the pixel in the synthetic image; \( Tran' \), \( Rad_{path}' \) and \( Rad_{sky}' \) are the atmospheric transmission ratio, the path radiance and the sky radiance from the pixel point to the sensor object; and \( Rad_{after} \) is the radiance value with the atmospheric effects applied.
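
The four steps can be consolidated into one routine. The sketch below is a minimal rendition, assuming the FeatureImages container sketched earlier and three hypothetical atmosphere-service callbacks (transmission, path_radiance, sky_radiance) whose form the text does not prescribe. The image-level pre-sort of Eq. (1) only supplies an initial ordering, so here the per-pixel sort of step 2 alone determines the compositing order.

```python
import numpy as np

def synthesize(images, transmission, path_radiance, sky_radiance):
    """Synthesize the feature images of n emitters/reflectors (Eqs. (1)-(7)).

    images ........ list of FeatureImages, one per emitter/reflector
    transmission .. transmission(d_far, d_near): atmospheric transmission
                    ratio between two points on the line of sight
    path_radiance . path_radiance(d_far, d_near): path radiance between them
    sky_radiance .. sky_radiance(d): sky radiance at distance d
    """
    rows, cols = images[0].radiance.shape
    out = FeatureImages(radiance=np.zeros((rows, cols)),
                        transparency=np.zeros((rows, cols)),
                        truth=np.zeros((rows, cols), dtype=int),
                        distance=np.zeros((rows, cols)))

    for k in range(rows):
        for l in range(cols):
            # Step 2: order the samples at (k, l) by distance, descending.
            # (The text uses bubble sort; any sort gives the same order.)
            samples = sorted(images, key=lambda s: s.distance[k, l],
                             reverse=True)

            # Step 3: composite from the top-level (farthest) pixel down to
            # the bottom-level (nearest) pixel with Eq. (6).
            rad = samples[0].radiance[k, l]
            d_far = samples[0].distance[k, l]
            for near in samples[1:]:
                d_near = near.distance[k, l]
                rad = ((rad * transmission(d_far, d_near)
                        + path_radiance(d_far, d_near))
                       * near.transparency[k, l]) + near.radiance[k, l]
                d_far = d_near  # the synthesized value acts as the far pixel

            # Distance, transparency and truth come from the bottom-level pixel.
            bottom = samples[-1]
            d = bottom.distance[k, l]

            # Step 4: apply the atmosphere between pixel and sensor, Eq. (7).
            tran_s = transmission(d, 0.0)
            path_s = path_radiance(d, 0.0)
            out.radiance[k, l] = (rad * tran_s + path_s
                                  + bottom.transparency[k, l]
                                  * (sky_radiance(d) - path_s))
            out.transparency[k, l] = bottom.transparency[k, l]
            out.truth[k, l] = bottom.truth[k, l]
            out.distance[k, l] = d
    return out
```

Per-pixel Python loops like this are slow; a practical implementation would vectorize over the image, but the loop form mirrors the four steps most directly.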

3 Simulation Example

We take the infrared imaging simulation of a helicopter as an example to verify the effectiveness of the proposed external rendering algorithm. As shown in Fig. 4, a reconnaissance plane equipped with an infrared sensor is monitoring the helicopter. The simulation takes the terrain object, the tree objects and the atmospheric effects into account.

Fig. 4. Schematic diagram of the simulation scenario

3.1 Simulation Implementation

We use CAD technology to create the 3D geometric models of the targets, trees and terrain. The geometric profile uses a mixed surface structure composed of triangular and quadrangular elements. To generate a target, we build these unit structures from the mixed surface elements and assemble them into the target’s complete geometry. We then remove the hidden surfaces of each surface element with the Z-buffer algorithm, after a coordinate transformation that accounts for the viewing angle, the height and the distance between the surface element and the sensor; this yields the geometric view of the target.
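
As an illustration of the hidden-surface step, here is a minimal Z-buffer sketch: each rasterized fragment overwrites a pixel only if it is nearer than what is already stored there. The fragment format and function name are hypothetical, and the projection and rasterization of the triangle/quadrangle elements are omitted.

```python
import numpy as np

def zbuffer_view(fragments, rows, cols):
    """Build the geometric view from projected surface fragments.

    fragments: iterable of (row, col, depth, value) tuples, one per pixel
    covered by a rasterized triangle/quadrangle element.
    """
    depth = np.full((rows, cols), np.inf)   # nearest depth seen so far
    view = np.zeros((rows, cols))           # surface attribute per pixel
    for r, c, z, value in fragments:
        if z < depth[r, c]:                 # the nearer surface wins the pixel
            depth[r, c] = z
            view[r, c] = value
    return view, depth
```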

After generating the 3D geometric model of the target, we assign radiance values to the model to represent the target’s infrared radiation characteristics. These radiance images do not yet include the atmospheric transmission effects. We store the infrared radiance images in an infrared image database. The whole process is shown in Fig. 5.

Fig. 5. The process of creating the target model

3.2 Simulation Result

We store the simulation results in a database and display them with SigView, a tool provided by JMASE [9]. The results are shown in Fig. 6.

Fig. 6. The result of the infrared imaging simulation

From Fig. 6 we can see that four kinds of objects are simulated: a target object, a terrain object, tree objects and an environment object. The radiance, range, truth and transparency images serve as inputs to the infrared sensor object, while the infrared environment object outputs synthetic radiance, range and truth images.

This example shows that the design and implementation of the infrared environment model correctly simulate the infrared imaging of the target and background. The radiation intensity relationship and the interaction between the targets and the background can be clearly seen in the infrared images.

4 Conclusions

In this paper, we propose an external rendering algorithm for IR imaging modeling of complex infrared scenes. We study the segmentation and aggregation of the targets in a complex scene and enumerate four applicable methods, and we propose a practical method for synthesizing the IR images. We implement the external rendering algorithm and validate its effectiveness through a case study. Future work includes a distributed version of the external rendering algorithm to improve fidelity and scalability.