
1 Introduction

Over the past few years, significant research has been conducted in the field of Advanced Driver Assistance Systems (ADAS) and autonomous vehicles, devoted to vision-based vehicle detection for increasing safety in the on-road environment [1, 2].

The development of technologies for detecting moving objects while driving represents a serious challenge in the choice of accident-prevention approaches.

Providing strong and dependable vehicle detection from visual sensors remains difficult due to the diversity of shapes, dimensions, and colors of on-road vehicles.

Furthermore, road infrastructure and nearby objects may introduce complex shadowing and scene clutter, reducing the full visibility of vehicles and making their observation difficult.

Moving object detection, by definition, refers to identifying the physical movement of an object in a given region or area. Over the last few years, moving object detection has attracted considerable attention because of its wide variety of applications, such as video surveillance, human motion analysis, robot navigation, anomaly detection, traffic analysis, and security.

Moving object detection can also be influenced by all parts of the on-road environment that are hard to control, e.g. variations in lighting conditions, uncontrolled backgrounds, and unexpected interactions among traffic participants.

To address this serious risk, we propose a system that monitors road moving objects in real time and alerts the driver at critical moments, for instance when they are exhausted, in order to reduce car accidents.

The rest of the paper is organized as follows: Section 2 reviews related works dealing with moving object detection approaches, Section 3 presents the proposed moving object detection method, Section 4 reports the experimental results of this work, and Section 5 concludes and outlines future work.

2 Related Works

In this paper, we concentrate on recent works that monitor road moving objects (pedestrians, cars, pets, etc.) and on whether car technologies are going to help reduce the number of road accidents. According to the literature, there are multiple categories of technologies that can detect moving objects. Moving object detection has become a central topic in the field of computer vision because of its wide variety of applications, such as video surveillance, airport security monitoring, law enforcement, automatic target identification, marine surveillance, and human activity recognition [3]. Several methods have been proposed for object detection, among which Background Subtraction, Frame Differencing, Temporal Differencing, and Optical Flow [4] are the most widely used conventional techniques.

Moving object detection remains a challenging task because of a number of factors such as dynamic backgrounds, illumination variations, misclassification of shadows as objects, camouflage, and bootstrapping issues.

2.1 Optical Flow

Clustering is performed according to the optical flow distribution characteristics of the images. The moving object is separated from the background, and complete motion information about the moving object is obtained. However, the large amount of computation and the sensitivity to noise make this approach unsuitable for real-time applications [5].
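To make the comparison concrete, a minimal sketch (Python with OpenCV) of dense optical flow between two consecutive frames using the Farneback method is given below; the frame file names and the motion-magnitude threshold are illustrative assumptions, not details from this work.

```python
# Dense optical flow (Farneback) between two consecutive grayscale frames;
# pixels with large flow magnitude are treated as moving regions.
# File names and the threshold below are illustrative assumptions.
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
moving_mask = (magnitude > 2.0).astype(np.uint8) * 255   # assumed threshold
cv2.imwrite("moving_mask.png", moving_mask)
```

The full flow field illustrates the heavy computation mentioned above: every pixel receives a motion vector, which is why the method struggles in real time.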

2.2 Background Subtraction

The difference between the current image and a background image is used to detect moving objects with a simple algorithm. It gives the most complete object information when the background is known. However, it has poor anti-interference ability and is sensitive to changes in the external environment [6].

2.3 Frame Subtraction

The difference between two consecutive images is taken to determine the presence of moving objects, as illustrated by the sketch below. The calculation in this method is very simple and easy to implement. However, it is difficult to obtain a complete outline of the moving objects.
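A minimal sketch (Python with OpenCV) of this frame subtraction scheme follows; the video path and the fixed threshold value are assumptions for illustration only.

```python
# Frame subtraction: each frame is differenced against the previous one,
# so only a rough outline of the moving objects is obtained.
# The video path and threshold are illustrative assumptions.
import cv2

cap = cv2.VideoCapture("road_sequence.mp4")   # hypothetical input video
ret, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)            # difference of consecutive frames
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    cv2.imshow("frame difference", mask)
    prev = gray                               # the reference frame slides forward
    if cv2.waitKey(30) & 0xFF == 27:          # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```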

2.4 Temporal Differencing

Temporal differencing uses a pixel-wise difference between two successive frames [7]. The conventional temporal differencing technique adapts well to dynamic changes in the scene. However, results degrade when a moving target moves slowly, because the small difference between successive frames causes the object to be lost. In addition, trailing regions may be wrongly detected as moving objects when an object moves quickly, and inaccurate detection occurs where objects have uniform regions [8] (Fig. 1).

Fig. 1 System overview

3 Proposed Approach

In our system, a smart camera is attached to the dashboard of the car. It captures images of the different road moving objects (pedestrians, cars, cyclists, pets, etc.).


3.1 System Flowchart

The system architecture flowchart is shown in Fig. 2. The proposed system assesses the risk level by calculating the stopping distance Ds. We note:

Fig. 2 The flowchart of the proposed system

Ds = Distance to Stop (in meters), Dr = Reaction Distance, Db = Braking Distance, Tr = Reaction Time, and S = Speed (in km/h). D is the distance between our car and the detected object; it is provided by the on-board computer.

  • Tr = 1 s for a vigilant person.

  • \( \uptheta \) = 1 in good weather and \( \uptheta \) = 1.5 in rainy weather.

We calculate these distances by the following formulas:

$$ Dr = Tr \times \frac{S \times 1000}{3600} $$
(1)
$$ Db = \frac{S \times 3}{10} \times \uptheta $$
(2)
$$ Ds = \left( Tr \times \frac{S \times 1000}{3600} \right) + \left( \frac{S \times 3}{10} \times \uptheta \right) $$
(3)

We define three rules to detect the risk state (a minimal sketch implementing them follows the list):

  • R1: If (D > Ds), Risk state = 0.

    • There is no risk.

  • R2: If ((D < Ds) and (D > Dr)), Risk state = 1.

    • There is a small risk, but the driver has time to react and avoid an accident.

  • R3: Else, Risk state = 2.

    • There is a high risk; the driver or co-pilot must brake immediately.
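The rules above can be summarized in a minimal sketch (Python); the default parameter values and the example call are illustrative assumptions, and the distance D is assumed to be supplied by the detection module.

```python
# Risk-state classification from Eqs. (1)-(3) and rules R1-R3.
# S is the speed in km/h, Tr the reaction time in seconds, theta the
# weather factor; D is the distance (m) to the detected object.
def risk_state(D, S, Tr=1.0, theta=1.0):
    """Return 0 (no risk), 1 (small risk) or 2 (high risk)."""
    Dr = Tr * S * 1000 / 3600          # reaction distance, Eq. (1)
    Db = (S * 3 / 10) * theta          # braking distance, Eq. (2)
    Ds = Dr + Db                       # stopping distance, Eq. (3)
    if D > Ds:
        return 0                       # R1: no risk
    elif D > Dr:
        return 1                       # R2: small risk, driver can still react
    else:
        return 2                       # R3: high risk, brake immediately

# Example: 90 km/h in rainy weather (theta = 1.5), object detected 40 m ahead
print(risk_state(D=40, S=90, Tr=1.0, theta=1.5))   # -> 1 (small risk)
```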

3.2 Background Subtraction

In this work, we use a background subtraction technique because it provides more useful information for our application. Background subtraction is regarded as one of the most reliable and adequate methods for moving object detection. It works as follows: first, a background model is initialized; then the current frame is compared with the assumed background model by comparing each pixel of the current frame with the corresponding pixel of the background model color map. If the color difference is greater than a threshold, the pixel is considered to belong to the foreground [9]. The performance of the traditional background subtraction technique is mostly affected when the background is dynamic, when the illumination changes, or in the presence of shadows. Various strategies have been developed to improve the background subtraction technique and overcome its drawbacks. The different background subtraction techniques reviewed by Piccardi [10] are: concurrence of image variations, eigenbackgrounds, mixture of Gaussians, kernel density estimation (KDE), running Gaussian average, sequential KD approximation, and temporal median filter.

In the proposed system, a Gaussian smoothing operator is applied to reduce image noise and detail, and the threshold value is adapted dynamically according to the lighting changes between the two images. This effectively reduces the impact of illumination changes. The first frame is taken directly as the background frame and is then subtracted from the current frame to detect the moving objects.

$$ G(x) = \frac{1}{\sqrt{2\pi \sigma^{2}}}\, e^{-\frac{x^{2}}{2\sigma^{2}}} $$
(4)
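A minimal sketch (Python with OpenCV) of this step is given below; the video path, the minimum blob area, and the use of Otsu's method as the dynamically adapted threshold are assumptions standing in for implementation details not given above.

```python
# Background subtraction: the first frame is taken as the background,
# both frames are Gaussian-smoothed (Eq. 4), and the absolute difference
# is thresholded to obtain the foreground mask.
import cv2

cap = cv2.VideoCapture("road_sequence.mp4")       # hypothetical input video
ret, background = cap.read()
background = cv2.GaussianBlur(
    cv2.cvtColor(background, cv2.COLOR_BGR2GRAY), (5, 5), 0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    diff = cv2.absdiff(gray, background)
    # Otsu's method stands in for the dynamically adapted threshold
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:              # ignore small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("moving objects", frame)
    if cv2.waitKey(30) & 0xFF == 27:              # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```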

3.3 Object Detection

As is well known, a video is a collection of basic structural units such as scenes, shots, and frames. Objects (cars, pedestrians, etc.) are detected with the Viola-Jones method. This method allows the detection of objects for which a learning phase has been performed [11, 12]. It was designed primarily for face detection, but it may be used for other types of objects. As a supervised learning technique, the Viola-Jones method requires hundreds to thousands of samples of the target object to train a classifier. The classifier is then used in an exhaustive search for the object over all possible positions and sizes of the image to be processed [13].

This method has the advantage of being both effective and fast. The Viola-Jones method uses synthetic representations of pixel values: the pseudo-Haar (Haar-like) features. These features are computed as the difference of the sums of pixels over two or more adjacent rectangular regions (Fig. 3), for all positions and scales within a detection window. The number of features can therefore be very large. The best features are then selected by a boosting procedure, which builds a “strong” classifier by weighting “weak” classifiers.

Fig. 3 Examples of features used

The Viola-Jones algorithm uses the Haar-like features.

The exhaustive search for an object within an image is costly in computing time, so the classifiers are organized in a cascade: each classifier decides on the presence or absence of the object in the current window. The simplest and fastest classifiers are placed first, which quickly discards many negative windows (Fig. 4).

Fig. 4 Cascade of classifiers [14]

In general, the Viola-Jones method gives good results for face detection as well as for other objects, with few false positives and a low computing time, which allows it to operate in real time [15].

The recognition of the different road moving objects is essential to reduce the risk of an accident.
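A minimal sketch (Python with OpenCV) of cascade-based detection is given below; the cascade file and image names are assumptions, since training a dedicated cascade for cars or pedestrians is not shown here.

```python
# Viola-Jones cascade detection over a single frame: an exhaustive
# multi-scale search in which the cascade rejects most negative windows early.
import cv2

cascade = cv2.CascadeClassifier("haarcascade_fullbody.xml")  # assumed cascade file
frame = cv2.imread("road_frame.png")                         # hypothetical frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

objects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4,
                                   minSize=(30, 30))
for (x, y, w, h) in objects:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite("detections.png", frame)
```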

4 Experimental Results

In this section, we describe the experimental results obtained throughout this work.

4.1 Database

To evaluate the performance of our proposed solution, we applied our approach to the KITTI dataset [16], a street-scene dataset for moving object detection with three categories (car, pedestrian, and cyclist). It contains 7481 street-scene images and 28,521 car objects [17] (Fig. 5).

Fig. 5 Moving car detection and tracking in the KITTI dataset

4.2 Results

We conducted experiments on 18 test sequences from the KITTI database.

Results are given in Table 1.

Table 1 Quantitative analysis of our proposed system

The algorithm is evaluated by computing the good detection ratio (GDR) of moving objects using the following formula, where BDR denotes the bad detection ratio.

$$ {\text{GDR}} = \frac{\text{Number of detected moving objects}}{\text{Total number of moving objects}} \times 100 $$
(5)
$$ {\text{BDR}} = 100 - {\text{GDR}} $$
(6)
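A minimal sketch of these two metrics is given below; the counts in the example are illustrative and are not results from this work.

```python
# Good Detection Ratio (Eq. 5) and Bad Detection Ratio (Eq. 6), in percent.
def gdr(num_detected, num_total):
    return 100.0 * num_detected / num_total

def bdr(num_detected, num_total):
    return 100.0 - gdr(num_detected, num_total)

print(gdr(43, 50), bdr(43, 50))   # illustrative counts -> 86.0 14.0
```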

As a result of the analysis phase, we obtained a favorable recognition rate for moving object detection. Our new approach improves the detection performance of the system in the presence of the occlusion problem, achieving a good detection ratio (GDR) between 86 % and 100 %.

5 Conclusion and Future Works

Throughout this work, we have presented a new Advanced Driver Assistance System. This work offers a new system for controlling the risks outside the car by detecting and tracking the different road moving objects. The developed system is based on computer vision techniques.

As future work, we intend to propose a car safety assistance system that controls both the risks inside the car (the driver's vigilance state) and the risks outside the car (pedestrians, moving objects, road lanes, and road signs).