
15.1 Introduction

Making road vehicles fully autonomous has been studied for many years. Different approaches have been employed: color segmentation, control theory, neural networks, etc. Obstacle detection and avoidance remains an open research area. Research on automatic recognition of traffic signs started more recently and is growing rapidly. Such recognition could serve as a driver assistant, alerting drivers to the presence of specific signs or risky situations.

Escalera et al. [1] proposed a method for traffic sign detection and classification. They employ color thresholding to segment the image and shape analysis to detect signs, and then apply a neural network for classification. Hirose et al. [2] proposed the Simple Vector Filter to distinguish between object and background pixels; a genetic algorithm with search limits is used to achieve real-time position recognition. Janssen et al. [3] proposed a scheme for traffic sign recognition using color, shape, and pictograms. Martinovic et al. [4] proposed a method for automated localization of certain traffic signs, in which the speed limit is determined by partitioning the number characters.

Most previous approaches [5–9] detect traffic signs by first locating the sign border and then extracting the information inside it. They employ complicated chromaticity models that require heavy computation. This paper instead employs the CIE-rg chromaticity space, which requires only simple computations and therefore makes hardware realization feasible. We propose a hardware-software co-design scheme on FPGA, in which the regular computations are executed in parallel in hardware, so that speed-limit signs can be detected and recognized in real time.

15.2 Detection and Recognition of Speed-Limit Signs

15.2.1 CIE-rg Chromaticity Space

An efficient way to detect traffic signs in a picture is to exploit their color characteristics. In this paper, we employ the CIE-rg chromaticity space as the color characterization because of its simple computations. The CIE-rg values can be derived from RGB values by Eq. (15.1). Speed-limit signs have a white background surrounded by a red ring, so their detection can be based on processing the red-colored parts. From Fig. 15.1, it is observed that red colors reside almost entirely within the triangle enclosed by the blue lines, which is described by Eq. (15.2).

$$ \begin{cases} r = R/(R + G + B) \\ g = G/(R + G + B) \end{cases} $$
(15.1)
$$ \begin{cases} r \ge 0.329 \\ g \le 0.324 \\ g \le r - 0.341 \end{cases} $$
(15.2)
Fig. 15.1 CIE-rg chromaticity space

The rg values range from 0 to 1. To reduce hardware resources, the required floating-point computations are converted to integer computations. Hence, r′g′ values are introduced, which are the rg values multiplied by 1,024, as shown in Eq. (15.3). Pixels with red color are then identified by Eq. (15.4). Using Eq. (15.4), the RGB picture is converted to a binary image, where 1 represents a red pixel and 0 a non-red pixel.

$$ \begin{cases} r' = 1024\,R/(R + G + B) \\ g' = 1024\,G/(R + G + B) \end{cases} $$
(15.3)
$$ \begin{cases} r' \ge 377 \\ g' \le 332 \\ g' \le r' - 35 \end{cases} $$
(15.4)
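
For illustration, a minimal C sketch of the red-pixel test of Eqs. (15.3) and (15.4) is given below. The function name, the 8-bit channel representation, and the guard for an all-black pixel are assumptions not stated in the paper.

```c
#include <stdint.h>

/* Sketch of Eqs. (15.3)-(15.4): classify a pixel as red in CIE-rg space. */
static int is_red_pixel(uint8_t R, uint8_t G, uint8_t B)
{
    uint32_t sum = (uint32_t)R + G + B;
    if (sum == 0)
        return 0;                          /* all-black pixel: treat as non-red */

    /* integer r'g' values, i.e., the rg values scaled by 1,024 (Eq. 15.3) */
    uint32_t r1 = (uint32_t)R * 1024u / sum;
    uint32_t g1 = (uint32_t)G * 1024u / sum;

    /* thresholds of Eq. (15.4); g' <= r' - 35 is written without underflow */
    return (r1 >= 377u) && (g1 <= 332u) && (g1 + 35u <= r1);
}
```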

15.2.2 Noise Eliminating and Boundary Smoothing

In this paper, speed-limit signs are detected by recognizing the boundary of the red-colored ring, so it is important to eliminate noise and to smooth the boundaries of the red-colored regions. Here, a 5 × 5 template is employed. Let X denote the center pixel of the 5 × 5 template and S the number of 1's in the template. By Eq. (15.5), noise elimination and boundary smoothing can be conducted with different thresholds δl and δu. Note that X remains unchanged if δl < S < δu.

$$ X = \begin{cases} 0, & \text{if } S \le \delta_{l} \\ 1, & \text{if } S \ge \delta_{u} \end{cases} $$
(15.5)

The noise-elimination process is conducted gradually. First, the process is run several times with δl = 6 and δu = 19, which eliminates small noise. Next, it is run several times with δl = 8 and δu = 17 to remove larger noise, and then several times with δl = 10 and δu = 15 to ensure that all noise is removed. Finally, boundary smoothing is obtained with δl = 11 and δu = 14.
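
A minimal C sketch of one pass of the 5 × 5 template of Eq. (15.5) is shown below. The separate output buffer, the unchanged two-pixel border, and the fixed image dimensions are assumptions made for illustration; the paper does not specify these details.

```c
#include <stdint.h>
#include <string.h>

#define IMG_W 800
#define IMG_H 400

/* One pass of the 5 x 5 template of Eq. (15.5) over a binary image. */
static void template_pass(const uint8_t in[IMG_H][IMG_W],
                          uint8_t out[IMG_H][IMG_W],
                          int delta_l, int delta_u)
{
    memcpy(out, in, (size_t)IMG_H * IMG_W);   /* border pixels copied unchanged */
    for (int y = 2; y < IMG_H - 2; y++) {
        for (int x = 2; x < IMG_W - 2; x++) {
            int s = 0;                        /* S: number of 1's in the window */
            for (int dy = -2; dy <= 2; dy++)
                for (int dx = -2; dx <= 2; dx++)
                    s += in[y + dy][x + dx];
            if (s <= delta_l)      out[y][x] = 0;          /* clear sparse 1's */
            else if (s >= delta_u) out[y][x] = 1;          /* fill sparse 0's  */
            else                   out[y][x] = in[y][x];   /* X unchanged      */
        }
    }
}
```

Following Sect. 15.2.2, the pass is repeated several times for each threshold pair, e.g. template_pass(a, b, 6, 19), then (8, 17), (10, 15), and finally (11, 14) for boundary smoothing, alternating between the two buffers.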

15.2.3 Recognizing the Inner Boundary of the Red-Colored Ring

In this paper, the picture is assumed to be of size 800 × 400 pixels, and only speed-limit signs whose inner area has a diameter larger than 40 pixels are considered. Hence, only rows 35, 70, …, 385 need to be scanned to find all speed-limit signs. Scanning begins at the left end of each scan-line, and the next scan-line is searched once the right end of the current one is reached, as shown in Fig. 15.2a–c. Let pixel (Lx, Ly) denote the second visited red boundary pixel and (Sx, Sy) the third visited red boundary pixel. When (Sx, Sy) is reached, the boundary is scanned clockwise. Consider first the case in which (Sx, Sy) lies on the outer boundary, as shown in Fig. 15.2d. Then (Sx, Sy) is reached again and the scanning is restarted from (Sx−1, Sy), as shown in Fig. 15.2e. In the other case, (Lx, Ly) is reached first and the inner boundary has been found, as shown in Fig. 15.2f.

Fig. 15.2 Recognizing the inner boundary of the red-colored ring
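
The scan-line search itself is simple; a C sketch is given below. Scanning every 35th row guarantees that any inner area with a diameter larger than 40 pixels is crossed at least once. The callback name trace_from and the binary-image layout are illustrative assumptions; in the actual procedure, clockwise tracing starts once (Sx, Sy) is reached as described above.

```c
#include <stdint.h>

#define IMG_W 800
#define IMG_H 400

/* Scan rows 35, 70, ..., 385 of the binary red image and report each
 * left edge of a red run as a candidate for the clockwise trace. */
static void scan_for_candidates(const uint8_t red[IMG_H][IMG_W],
                                void (*trace_from)(int x, int y))
{
    for (int y = 35; y < IMG_H; y += 35) {
        for (int x = 1; x < IMG_W; x++) {
            if (!red[y][x - 1] && red[y][x])   /* non-red -> red transition */
                trace_from(x, y);
        }
    }
}
```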

15.2.4 Extracting Boundary Information

The vector from the currently visited pixel to the next visited pixel points in one of eight directions: 0°, 45°, …, 315°. The vector codes 0, 1, …, 7 denote the directions 0°, 45°, …, 315°, respectively. The labeling of the pixels adjacent to the currently visited pixel is shown in Fig. 15.3. As described above, the inner boundary is scanned clockwise. Initially, the base pixel is red, and pixel(0) must also be red because noise has already been eliminated. We then search the adjacent pixels clockwise from pixel(0) until the first non-red adjacent pixel, pixel(j), is visited. The vector code ((j + 1) mod 8) is derived and the scan moves to pixel((j + 1) mod 8). Subsequent vector codes are derived in the same way except for the choice of the starting adjacent pixel. Since smoothing has been applied, the difference between the current vector code and the previous vector code is at most 1; hence, if the previous vector code is i, the starting adjacent pixel is set to pixel((i + 1) mod 8).

Fig. 15.3 Labeling adjacent pixels
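
The sketch below transcribes one step of this trace into C. The neighbor-offset table is an assumed clockwise labeling of pixel(0)–pixel(7), since Fig. 15.3 is not reproduced here, and the guard against a fully red neighborhood is added for safety; both are illustrative assumptions.

```c
#include <stdint.h>

#define IMG_W 800
#define IMG_H 400

/* Assumed clockwise labeling of the eight neighbors pixel(0)..pixel(7). */
static const int DX[8] = { 1,  1,  0, -1, -1, -1,  0,  1 };
static const int DY[8] = { 0,  1,  1,  1,  0, -1, -1, -1 };

/* One step of the clockwise inner-boundary trace of Sect. 15.2.4.
 * 'start' is 0 for the first step and ((previous code) + 1) mod 8 afterwards.
 * Returns the vector code of the move and updates (*x, *y). */
static int trace_step(const uint8_t red[IMG_H][IMG_W], int *x, int *y, int start)
{
    int j = start, guard = 8;
    while (red[*y + DY[j]][*x + DX[j]] && --guard)   /* find first non-red pixel(j) */
        j = (j + 1) % 8;
    int code = (j + 1) % 8;                          /* vector code (j + 1) mod 8    */
    *x += DX[code];                                  /* move to pixel((j + 1) mod 8) */
    *y += DY[code];
    return code;
}
```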

The quasi-curvature of a segment is defined as the difference between the sum of the vector directions of its first half and that of its second half. For simplicity, the vector code is used as the vector direction. How the segment length (2n) is chosen is described in the next section. Suppose the first half consists of the vectors F1, F2, …, Fn and the second half of the vectors S1, S2, …, Sn; without ambiguity, the vector representation is used for the vector code. The quasi-curvature of the segment is then given by Eq. (15.6). Because the vector codes are circular, the difference between two arbitrary codes is hard to compute correctly, whereas the difference between two adjacent codes is simple, since after smoothing it is at most 1 in magnitude. (F1 − S1) can therefore be reformulated as n differences of adjacent vectors, i.e., (F1 − F2) + (F2 − F3) + … + (Fn − S1), and the other subtractions in Eq. (15.6) can be reformulated similarly.

$$ C = (F_{1} + F_{2} + \cdots + F_{n}) - (S_{1} + S_{2} + \cdots + S_{n}) = (F_{1} - S_{1}) + (F_{2} - S_{2}) + \cdots + (F_{n} - S_{n}) $$
(15.6)
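
A minimal C sketch of this computation is given below, assuming the 2n vector codes of a segment are stored in one array with the first half F1…Fn followed by the second half S1…Sn. The helper adjacent_diff is our illustration of the adjacent-difference reformulation for the circular codes 0–7.

```c
/* a - b for two adjacent vector codes, taking the circular range 0..7
 * into account; after smoothing the result is -1, 0, or +1. */
static int adjacent_diff(int a, int b)
{
    int d = (a - b + 8) % 8;
    return (d > 4) ? d - 8 : d;
}

/* Quasi-curvature C of a segment of two_n codes (Eq. 15.6), with every
 * term (F_k - S_k) expanded into n differences of adjacent codes. */
static int quasi_curvature(const int code[], int two_n)
{
    int n = two_n / 2, c = 0;
    for (int k = 0; k < n; k++)
        for (int i = k; i < k + n; i++)
            c += adjacent_diff(code[i], code[i + 1]);
    return c;
}
```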

A segment with quasi-curvature −1, 0, or +1 is defined as a straight segment (π). A segment with quasi-curvature less than −1 is defined as a clockwise curve segment (ρ), and, similarly, a segment with quasi-curvature greater than +1 as a counter-clockwise curve segment (φ). If the inner boundary consists of ρ only or of ρπρπ, a speed-limit sign is detected.

15.2.5 Recognizing Speed

A speed-limit sign shows black numbers on a white background. The first step is to derive the histogram of the gray values of the inner area, from which a suitable threshold is determined. A binary image is then generated with this threshold, and noise elimination and boundary smoothing are applied to it by the method described in Sect. 15.2.2.

Based on observations of the number characters, the segment length (2n) is determined from the length and width of the character as follows.

$$ 2n = \min(\mathrm{width} \times 0.6,\ \mathrm{length} \times 0.4) $$
(15.7)

As described in the previous subsection, the boundary can be decomposed into straight segments, clockwise curve segments, and counter-clockwise curve segments. However, some segments recognized as straight may not actually be straight, because their first half and second half are symmetric. The recognized straight segments are therefore verified with a series of shorter segment lengths (\( \mathrm{sub} = 2 \times \lfloor n/2^{i} \rfloor \), for \( \mathrm{sub} > 2 \)). Each number character is coded as a sequence of segment types starting from the top, and each character may have several codes corresponding to different writing patterns. With all of these codes, a search tree is established, and in the recognition process the derived code is matched against this search tree.
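
The search tree can be realized as a small trie over the three segment types. The sketch below shows such a structure in C; the per-digit code sequences are not listed in the paper, so only the data structure and its insert and match operations are illustrated.

```c
#include <stdlib.h>

/* Segment types of Sect. 15.2.4: straight, clockwise, counter-clockwise. */
enum seg_type { SEG_PI = 0, SEG_RHO = 1, SEG_PHI = 2 };

struct trie_node {
    struct trie_node *child[3];
    int digit;                              /* recognized digit, or -1 */
};

static struct trie_node *new_node(void)
{
    struct trie_node *n = calloc(1, sizeof *n);
    if (n)
        n->digit = -1;
    return n;
}

/* Register one code sequence (one writing pattern) for a digit. */
static void trie_insert(struct trie_node *root, const enum seg_type *code,
                        int len, int digit)
{
    for (int i = 0; i < len && root; i++) {
        if (!root->child[code[i]])
            root->child[code[i]] = new_node();
        root = root->child[code[i]];
    }
    if (root)
        root->digit = digit;
}

/* Match a derived code sequence; returns the digit or -1 if no match. */
static int trie_match(const struct trie_node *root,
                      const enum seg_type *code, int len)
{
    for (int i = 0; i < len && root; i++)
        root = root->child[code[i]];
    return root ? root->digit : -1;
}
```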

15.3 System Design

In this paper, we use the DE2-70 FPGA board from Altera, which hosts an embedded NIOS II processor and a CMOS sensor. We configure a data-transfer module and a processing module on the FPGA. The data-transfer module is responsible for data transmission between NIOS II and the processing module. The processing module performs the generation of the binary red image, noise elimination, and smoothing, which require a large amount of computation. The other operations are done in software on NIOS II (Fig. 15.4).

Fig. 15.4 System architecture

15.4 Performance Evaluation

We collected 104 pictures. In one of them, the red ring of the speed-limit sign is partly covered by tree leaves. The speed-limit signs in the other 103 pictures are detected correctly; the sign covered by tree leaves cannot be detected because its red ring is not recognized. The detection rate is thus 99 %. The speed limits in all 103 detected pictures are recognized correctly, so the recognition rate is 100 %.

On the DE2-70, only 62.558 ms in total is needed to recognize a picture, which meets the real-time requirement. Data transfer takes 15.25 ms, and software processing on NIOS II takes most of the time (47.308 ms). Noise elimination and smoothing in hardware take very little time (0.544 ms) and are executed in parallel with the data transfer.

15.5 Conclusion

This paper proposes a hardware-software co-design scheme on FPGA, in which the regular computations are executed in parallel in hardware, so that speed-limit signs are detected and recognized in real time. The experiments show that the proposed scheme can detect and recognize a speed-limit sign in 62.558 ms, with a detection rate of about 99 % and a recognition rate of 100 %.

The boundary-detection algorithm could be enhanced to recognize the borders of traffic signs partly covered by tree leaves, and extending the scheme to most other traffic signs is worth investigating in future work.