1 Introduction

Autonomous robot navigation with obstacle avoidance is an important task for the effective use of mobile robots in many applications such as surveillance systems and industrial, assistive and service robotics (Freda and Oriolo 2007; Guzel and Bicker 2011; Wang and Meng 2015). Robot navigation is the process of computing motion sequences that allow the controlled robot to move through the workspace successfully without human intervention (Shuzhi and Lewis 2006). In complex and cluttered environments, the motion system needs a degree of intelligence to control the mobile robot without collisions with the objects present in the environment (Latombe 1991). In robotic applications, different sensors have been used to accomplish navigation tasks (localization, interception, obstacle avoidance or navigation) (Ayache and Sander 1991; Boumehraz et al. 2018). The camera is a very powerful vision sensor used in robotics. It mimics the vision of living beings, capturing visual properties of the surrounding environment such as colors and shapes. Computer vision is a valuable tool that allows measurement and detection of the environment without physical contact with objects. The best-known applications are image registration and enhancement, matching for stereo vision, pattern recognition, and motion estimation and analysis (Saitoh et al. 2009; Desouza and Kak 2002). The advantage of vision in robotic applications is that it enables robots to operate and interact with their environments with greater precision and intelligence.

Different methods have been proposed to solve the robot navigation problem. Visual-based navigation is one of the most important approaches used in the robotics field (Gupta et al. 2008; Fantoni and Sanahuja 2014; Miguel and Pascual 2015; Zhang et al. 2016). It has motivated extensive research, since vision strategies can increase the utility of mobile robots in different domains (Alenya and Crowley 2009; Guzel and Bicker 2011; Becerra et al. 2014). To use vision in robotics, two main methods can be employed: the first is based on computing the apparent motion of objects, while the second relies on the appearance of each pixel in the image (Guzel and Bicker 2011).

The proposed vision-based navigation methods have been applied in indoor and outdoor robot environments (Desouza and Kak 2002; Saitoh et al. 2009; Benn and Lauria 2012). Some contributions are summarized in Desouza and Kak (2002) and Font et al. (2008). In Font et al. (2008), the authors present a survey of visual navigation techniques for mobile robots, aerial vehicles and autonomous underwater vehicles. Two major approaches are discussed: map-based navigation and mapless navigation. In Desouza and Kak (2002), vision-based navigation and localization methods for mobile robots are extensively surveyed for applications in indoor and outdoor environments.

Tasalatsanis et al. (2007) proposed real-time mobile robot navigation based on object tracking and obstacle avoidance tasks. A stereo vision system is used to detect the desired target, while a laser range finder is used to avoid collisions with surrounding objects.

In Benn and Lauria (2012), the authors introduced the use of monocular vision for robot navigation in dynamic, changing environments. The proposed obstacle detection solution is based on color segmentation and edge detection.

For robot applications in unstructured environments without prior knowledge, the most widely used approach for vision-based robot navigation is the optical flow method (Serres et al. 2006; Kahlouch and Achour 2007; Gupta et al. 2008; Chao et al. 2014; Wang and Meng 2015; Tajti et al. 2016). The optical flow (OF) approach is a velocity computation strategy used in many fields, such as motion estimation, video compression and reconstruction, and image segmentation (Benn and Lauria 2012; Corso 2014). In motion estimation, the motion vector is calculated from the difference between the current frame and the predicted one (Corso 2014; Serres and Ruffier 2017); this vector is then coded to reconstruct the motion values. In optical flow theory, there are two basic problems: illumination changes and motion discontinuities (Guzel and Bicker 2011). The pattern of apparent motion is highly sensitive to the lighting of the environment and to noise.

Optical flow is an important tool for vision-based mobile robot navigation (Corso 2014; Chao et al. 2014). This approach has been employed mainly in motion estimation and in detecting and avoiding obstacles (Kahlouch and Achour 2007; Guzel and Bicker 2011; Corso 2014; Chao et al. 2014). For example, in Kahlouch and Achour (2007), the authors studied the obstacle avoidance task using a balancing strategy based on the amount of flow on the left and right sides. The depth is calculated from two consecutive images. This technique allows the mobile robot to move safely without collisions. Gupta et al. (2008) presented a modified version of the Continuously Adaptive Mean Shift algorithm (CAMShift) for fast moving-object tracking. Fuzzy-based structures are used to estimate the depth and the rotation angle in order to reach the desired object. Wang and Meng (2015) proposed an improved optical flow method for obstacle avoidance by a quadrotor equipped with a monocular camera. It combines a flow-balancing strategy with a fast Time-to-Collision (TTC) estimate to extract relative depth information about the environment. The controlled system is able to avoid obstacles effectively. Other applications of optical flow techniques in robotics are presented in Chao et al. (2014).

However, using the optical flow approach alone for mobile robot navigation tends to generate discrete control actions. Fuzzy logic is one of the most common approaches to overcome this limitation: it can achieve good performance with smooth, continuous actions. The main advantage of vision-based fuzzy control is the introduction of human experience, in the form of If-Then rules, about how to control the system based on the chosen variables.

In Miguel (2009), an implementation of two parallel fuzzy logic controllers is proposed for a pan-tilt camera mounted on an Unmanned Aerial Vehicle (UAV). The system uses a Lucas–Kanade tracker. The designed controllers show good performance in real flights with static objects. In Miguel and Pascual (2015), following the same idea, the authors proposed a vision-based fuzzy control approach for autonomous UAV navigation. The system uses an RGB camera with specific software to accomplish the different tasks in real experiments (autonomous landing, object following/inspection and collision avoidance).

In Kim and Noh (2012), primitive behaviors based on fuzzy visual navigation methods are proposed for humanoid robots (walking autonomously, tracking a target, obstacle avoidance, and marker recognition).

For mobile robot applications, a bio-inspired approach to optical flow data interpretation and integration based on a fuzzy inference system is proposed for visual robot navigation in Mai and Janschek (2009). The variables are derived from the treatment of regional optical flow patterns. A segmentation step is used to detect the topological and topographic structure of the robot's surroundings.

In Hamissi and Bazoula (2008), the authors introduced a visual navigation method for a fuzzy path-following task. A CCD camera mounted on the mobile robot is used to detect the desired path. From each captured image, the useful information is extracted, and the designed fuzzy logic controller guides the robot to follow the path effectively.

Guzel and Bicker (2011) considered vision-based obstacle avoidance using a fuzzy logic approach. The obstacle avoidance task is studied using the Scale Invariant Feature Transform (SIFT) and fuzzy logic. This method is inspired by the principle of feature matching, where SIFT-based descriptors use textured and structured scenes to match images. The FLC generates the steering control action to avoid collisions.

Vision-based tracking is also an important topic for visual robot navigation. The objective is to track and intercept moving objects in the environment; this task has been studied in Freda and Oriolo (2007), Tasalatsanis et al. (2007), Huang (2009), Zhang et al. (2016) and Boumehraz et al. (2018).

The objective of the present paper is to combine the Horn–Schunck optical flow method with fuzzy logic systems for mobile robot navigation. The simulation is carried out in a 3D environment using the Virtual Reality Modeling Language (VRML).

The paper is organized as follows. Section 2 presents the necessary background on the optical flow method and its concepts. A brief description of fuzzy logic systems is given in Sect. 3. The model of the mobile robot is introduced in Sect. 4. The proposed fuzzy visual controllers are introduced and explained in Sect. 5. Simulation results and discussion are presented in Sect. 6. Finally, conclusions are given in Sect. 7.

2 Optical flow approach

2.1 Optical flow

Optical flow (OF) is a velocity calculation strategy based on the distribution of apparent brightness motion in the image (Corso 2014). This method provides important information about the spatial arrangement of the viewed objects by tracking the motion vectors (Horn and Schunck 1981; Chao et al. 2014). The velocity components are constrained by:

$$I_{x} u + I_{y} v + I_{t} = 0$$
(1)

where Ix, Iy and It are the spatiotemporal image brightness derivatives, u is the horizontal optical flow and v is the vertical optical flow. Since this single equation has two unknowns, additional constraints are required, and it can be solved by different methods (Horn and Schunck 1981).

Optical flow methods can be classified into:

  1. Differential techniques,

  2. Region-based matching,

  3. Energy-based methods,

  4. Phase-based techniques.

The Horn–Schunck (HS) (Horn and Schunck 1981) and Lucas–Kanade (Lucas and Kanade 1981) methods belong to the differential techniques.

In this work, we use the Horn–Schunck (HS) algorithm to detect obstacles for mobile robot navigation. Implementing the HS algorithm involves two main processes: the first is the estimation of the partial derivatives, and the second is an iterative process that minimizes the error in the motion vector (Wang and Meng 2015; Horn and Schunck 1981; Corso 2014).

2.2 Horn–Schunck method

The HS algorithm is one of the classical optical flow algorithms, owing to its acceptable performance and simplicity. It was proposed by Horn and Schunck (1981). It is a differential method used in many applications, and it regularizes the estimated velocity field with a global smoothness constraint (Corso 2014).

The Horn–Schunck method (Horn and Schunck 1981) computes an estimate of the velocity field \(\left[ {u\;v} \right]^{T}\) that minimizes:

$$E = \iint {\left( {I_{x} u + I_{y} v + I_{t} } \right)^{2} dx\,dy} + \alpha \iint {\left\{ {\left( {\frac{\partial u}{\partial x}} \right)^{2} + \left( {\frac{\partial u}{\partial y}} \right)^{2} + \left( {\frac{\partial v}{\partial x}} \right)^{2} + \left( {\frac{\partial v}{\partial y}} \right)^{2} } \right\}dx\,dy}$$
(2)

where \(\frac{\partial u}{\partial x}\) and \(\frac{\partial u}{\partial y}\) are the spatial derivatives of the optical velocity component u (and similarly for v), and α is the weighting factor of the global smoothness term. The Horn–Schunck method minimizes (2) to obtain the velocity field [u v] for each pixel. The velocity values are computed iteratively as:

$$u_{x,y}^{k + 1} = \overline{u}_{x,y}^{k} - \frac{{I_{x} \left[ {I_{x} \overline{u}_{x,y}^{k} + I_{y} \overline{v}_{x,y}^{k} + I_{t} } \right]}}{{\alpha^{2} + I_{x}^{2} + I_{y}^{2} }}$$
(3)
$$v_{x,y}^{k + 1} = \overline{v}_{x,y}^{k} - \frac{{I_{y} \left[ {I_{x} \overline{u}_{x,y}^{k} + I_{y} \overline{v}_{x,y}^{k} + I_{t} } \right]}}{{\alpha^{2} + I_{x}^{2} + I_{y}^{2} }}$$
(4)

In these equations, \(\left[ {u_{x,y}^{k} ,v_{x,y}^{k} } \right]\) is the velocity estimate for the pixel at (x,y), and \(\left[ {\overline{u}_{x,y}^{k} ,\overline{v}_{x,y}^{k} } \right]\) is the neighborhood average of \(\left[ {u_{x,y}^{k} ,v_{x,y}^{k} } \right]\). For k = 0, the initial velocity is 0.

To solve for u and v using the Horn–Schunck method, we can use the following steps (Guzel and Bicker 2011); a minimal implementation sketch is given after the list:

  1. Compute \(I_{x}\) and \(I_{y}\) using the Sobel convolution kernel, [− 1 − 2 − 1; 0 0 0; 1 2 1], and its transposed form, for each pixel in the first image.

  2. Compute \(I_{t}\) between images 1 and 2 using the [− 1 1] kernel.

  3. Assume the previous velocity to be 0, and compute the average velocity for each pixel using [0 1 0; 1 0 1; 0 1 0] as a convolution kernel.

  4. Iteratively solve for u and v.
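A minimal Python sketch of these four steps is given below, using NumPy and SciPy (the libraries, the function name horn_schunck and the default values of alpha and n_iter are our assumptions; the paper does not provide an implementation):

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Sketch of the Horn-Schunck iteration, Eqs. (1)-(4).

    im1, im2 : consecutive grayscale frames (float arrays of equal shape).
    alpha    : global smoothness weight.
    Returns the horizontal (u) and vertical (v) flow fields.
    """
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)

    # Step 1: spatial derivatives with the Sobel kernel and its transpose.
    sobel_y = np.array([[-1., -2., -1.],
                        [ 0.,  0.,  0.],
                        [ 1.,  2.,  1.]])
    Ix = convolve(im1, sobel_y.T)
    Iy = convolve(im1, sobel_y)

    # Step 2: temporal derivative between the two frames ([-1 1] kernel).
    It = im2 - im1

    # Step 3: zero initial velocities and a 4-neighbour averaging kernel.
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    avg = np.array([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]]) / 4.0

    # Step 4: iterative update of Eqs. (3) and (4).
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```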

2.3 Focus of expansion (FOE)

The focus of expansion (FOE) is defined as the projection of the translation vector onto the image plane (Corso 2014). The FOE is the point toward which the camera is moving. In the image frame, points appear to expand outward from this point, as shown in Fig. 1a. The location of the FOE gives information about the direction of the 3D translation. The FOE is also used in three-dimensional reconstruction, time-to-contact calculation, and obstacle avoidance (Alenya and Crowley 2009; Corso 2014).

Fig. 1

a Focus of expansion. b Results of FOE (color figure online)

Additionally, the FOE is the projection point in the image from which information about the relative depth of pixels can be obtained; combined with the flow magnitude, this information yields the Time To Contact (TTC) (Arnspang et al. 1995). When the camera is moving forward, the focus of expansion appears as shown in Fig. 1b (red circle).
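As an illustration (not described in the paper), the FOE can be estimated from the computed flow field by least squares under a pure-translation assumption, since every flow vector should then point away from the FOE:

```python
import numpy as np

def estimate_foe(u, v, mag_thresh=0.1):
    """Least-squares FOE estimate from a dense flow field (u, v).

    Pure-translation assumption: the flow (u, v) at pixel (x, y) is parallel
    to (x - x_f, y - y_f), giving the constraint v*x_f - u*y_f = v*x - u*y.
    The threshold mag_thresh is an illustrative value.
    """
    ys, xs = np.mgrid[0:u.shape[0], 0:u.shape[1]]
    mask = np.hypot(u, v) > mag_thresh           # keep well-defined vectors
    A = np.stack([v[mask], -u[mask]], axis=1)    # rows: [v, -u]
    b = v[mask] * xs[mask] - u[mask] * ys[mask]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe                                    # (x_f, y_f) in pixel coordinates
```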

2.4 Time to contact (TTC)

The time to contact (TTC) concept was first introduced by Lee (1976). The goal of time to contact for robot navigation is to estimate the free space in front of the robot and, thus, to decide whether the robot has to turn (right or left) or to stop when an obstacle is detected. This problem has been studied by many authors, such as Lee (1976), Arnspang et al. (1995) and Alenya and Crowley (2009). In the case of camera movement along the optical axis Z, the time to contact is expressed in terms of the distance to the detected obstacle and the camera speed (Benn and Lauria 2012; Wang and Meng 2015). The TTC is computed as follows:

$$TTC = \frac{ - Z}{dZ/dt}$$
(5)

where Z is the camera-obstacle distance and dZ/dt is the velocity of the robot camera along the optical axis.
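Since Z and dZ/dt are not directly observable from a single camera, the TTC is in practice estimated from the image itself. A commonly used image-based approximation (our illustration; the paper does not give this formula) divides the distance of a pixel from the FOE by the magnitude of its radial flow:

```python
import numpy as np

def ttc_from_flow(u, v, foe, eps=1e-6):
    """Approximate per-pixel time to contact, in frame units.

    For translation along the optical axis, a point at image distance r from
    the FOE moves radially with speed dr/dt, and TTC is roughly r / (dr/dt).
    u, v : flow fields (e.g. from Horn-Schunck); foe : (x_f, y_f) estimate.
    """
    ys, xs = np.mgrid[0:u.shape[0], 0:u.shape[1]]
    dx = xs - foe[0]
    dy = ys - foe[1]
    r = np.hypot(dx, dy)                     # distance from the FOE
    radial = (u * dx + v * dy) / (r + eps)   # flow projected on the radial direction
    return r / (np.abs(radial) + eps)
```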

3 Fuzzy logic system

A fuzzy logic controller (FLC) is a system inspired by the human capacity to reason with linguistic information, and it is able to handle the uncertainties present in the controlled process. An FLC relies on a linguistic rule base that represents the operator's decision strategy. The fuzzy controller is composed of the following parts: fuzzification, rule base, inference mechanism and defuzzification (Passino and Yurkovich 1998), as shown in Fig. 2.

Fig. 2

Fuzzy logic system

4 The model of the mobile robot

In this section, we present the simulated mobile robot (MR) and its environment, built using the Virtual Reality Modeling Language (VRML). The mobile robot is a cylindrical platform with two motorized wheels, similar to the real mobile robot (ED7273). In order to perceive its environment and navigate autonomously, the robot is equipped with a virtual camera. The simulated mobile robot is illustrated in Fig. 3a. Using the VRML library, we have created a virtual navigation environment that imitates a real 3D space, containing obstacles in the form of boxes, a floor, walls and the goal. The designed 3D environment is depicted in Fig. 3b.

Fig. 3

Structure of the simulated robot environment

The robot motion is based on the nonholonomic kinematic model described as follows (a short discrete-time integration sketch is given after the list of parameters):

$$\begin{aligned} \dot{x}_{r} & = v\cos (\theta_{r} ) \\ \dot{y}_{r} & = v\sin (\theta_{r} ) \\ \dot{\theta }_{r} & = w \\ \end{aligned}$$
(6)

where:

  • xr and yr are the robot coordinates;

  • θr is the heading angle;

  • v is the linear velocity;

  • w is the angular velocity calculated from the steering angle αr.
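A short discrete-time sketch of model (6), using simple Euler integration (the time step dt and the function name are illustrative choices, not part of the paper):

```python
import numpy as np

def step_unicycle(x, y, theta, v, w, dt=0.1):
    """One Euler integration step of the nonholonomic model of Eq. (6).

    x, y, theta : current pose (position and heading angle, in rad).
    v, w        : linear and angular velocity commands.
    Returns the updated pose.
    """
    x_new = x + v * np.cos(theta) * dt
    y_new = y + v * np.sin(theta) * dt
    theta_new = theta + w * dt
    # keep the heading angle wrapped to (-pi, pi]
    theta_new = np.arctan2(np.sin(theta_new), np.cos(theta_new))
    return x_new, y_new, theta_new
```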

Figure 4 presents a schematic of the robot and the used parameters.

Fig. 4

Model of mobile robot

  • R is the radius of the robot platform;

  • ICR is the instantaneous center of rotation;

  • vr and vl are the right and left velocities of wheels;

  • drg is the distance between the robot and the desired goal;

  • θrg is the angle between the robot axis and the goal.

5 Proposed navigation system

The proposed robot navigation system is discussed in this section. The objective is to control the wheeled mobile robot so that it moves autonomously in its environment using the image acquired by the camera. We have studied the different robot tasks (functionalities): visual obstacle avoidance, goal seeking, interception of a moving target, and navigation with obstacle avoidance.

In this work, we make the following assumptions:

  • The robot and the goal move on the same plane;

  • Obstacles and objects in the simulated environment are static;

  • The camera is mounted on the robot in a fixed position (pan-tilt movements are not considered).

5.1 Structure parts

This part describes the control blocks that have been employed in our work. The block diagram of this control strategy is presented in Fig. 5. Using the information extracted from the captured image, the mean flow values of the right side (Fr) and the left side (Fl) are calculated using the equations:

$$F_{r} (t) = \frac{1}{N}\sum\limits_{{F \in S_{r} }} {F(t)}$$
(7)
$$F_{l} (t) = \frac{1}{N}\sum\limits_{{F \in S_{l} }} {F(t)}$$
(8)

where N is the number of flow elements in the corresponding half of the image, Sl is the left half of the image and Sr is the right half of the image.
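A short sketch of Eqs. (7) and (8), assuming that the flow value of each pixel is its flow magnitude (the paper does not specify the exact flow quantity, so this is an assumption):

```python
import numpy as np

def side_flows(u, v):
    """Mean optical flow of the left (Fl) and right (Fr) image halves, Eqs. (7)-(8)."""
    mag = np.hypot(u, v)          # per-pixel flow magnitude (assumed flow value)
    mid = mag.shape[1] // 2
    Fl = mag[:, :mid].mean()      # left half  S_l
    Fr = mag[:, mid:].mean()      # right half S_r
    return Fl, Fr
```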

Fig. 5

Visual obstacle avoidance navigator

The time to contact (TTC) is calculated using (5). This variable is used as a condition to activate the avoidance system when the robot is close to obstacles. The error between the left and right flows, noted Er, and its variation ΔEr are obtained using the following equations:

$$E_{r} (t) = F_{l} (t) - F_{r} (t)$$
(9)
$$\Delta E_{r} (t) = E_{r} (t) - E_{r} (t - 1)$$
(10)

They are used by the control system to produce the robot motion actions: the steering angle α and the robot velocity v. As can be seen from the structure presented in Fig. 5, the designed fuzzy controller uses variables extracted from the captured image to infer a local motion action that avoids the neighboring obstacles. The robot motion is governed by the kinematic model of Eq. (6).
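Putting these quantities together, a hedged sketch of the per-frame decision variables is given below (it reuses the side_flows sketch above; the TTC activation threshold is an illustrative value, not one given in the paper):

```python
def avoidance_inputs(u, v, prev_er, ttc, ttc_threshold=2.0):
    """Compute Er and its variation (Eqs. (9)-(10)) and the activation flag."""
    Fl, Fr = side_flows(u, v)
    er = Fl - Fr                      # Eq. (9)
    d_er = er - prev_er               # Eq. (10)
    activate = ttc < ttc_threshold    # avoidance only when an obstacle is close
    return er, d_er, activate
```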

5.2 Virtual camera

To perceive the environment, the robot is equipped with a single camera with high resolution. The image comprises 320 × 480 pixels. The advantage of using a single camera is its low cost and easy mounting on the robot due to its reduced weight and size.

5.3 Optical flow calculation

As mentioned previously, the Horn–Schunck (HS) optical flow algorithm is used to estimate the velocity vectors of the image.

Firstly, at each time step, the image of the environment is captured by the simulated VRML camera mounted on the robot. Then, the optical flow vectors are computed from the image sequence (for one decision value, two images are needed, at times t − 1 and t). The optical flow values of the image pixels are calculated using the HS algorithm.
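A sketch of this per-step acquisition loop, reusing the horn_schunck sketch of Sect. 2.2 (get_camera_image is a placeholder for the VRML camera interface, which the paper does not detail):

```python
def navigation_loop(get_camera_image, n_steps):
    """Per-step pipeline: grab a frame and compute flow against the previous one."""
    prev = get_camera_image()               # frame at time t - 1
    for _ in range(n_steps):
        frame = get_camera_image()          # frame at time t
        u, v = horn_schunck(prev, frame)    # HS flow between the two frames
        # ... use (u, v) for the FOE, TTC and the fuzzy controllers ...
        prev = frame
```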

Figures 6 and 7 show the color image captured by the camera and the generated optical flow values, respectively.

Fig. 6

Captured image

Fig. 7

Optical flow and FOE

In our application, as depicted in Fig. 8, the image is divided into two parts, left and right, in order to detect obstacles on either side. For each part of the image, the algorithm calculates the mean value of the optical flow (Fl and Fr), and the control system uses these values to allow the robot to avoid collisions.

Fig. 8

Dividing the image

5.4 Takagi–Sugeno fuzzy controllers

5.4.1 Fuzzy obstacle avoider

In order to avoid collisions with the nearest obstacles and objects in the robot's path, we have used a Takagi–Sugeno fuzzy logic controller with two inputs and one output (the robot moves at a fixed speed). The input variables are the difference between the flow values and its variation, Er and ΔEr, as depicted in the block diagram of Fig. 5. These two variables are fuzzified using the five membership functions (MFs) shown in Figs. 9 and 10, respectively. We have used triangular and trapezoidal MFs because they cover the input ranges efficiently (Passino and Yurkovich 1998). The output action is represented by singletons, as shown in Fig. 11. The linguistic values used for the input–output variables are: Z: Zero, PS: Positive Small, PB: Positive Big, NB: Negative Big, NS: Negative Small. The control strategy is expressed symbolically by the If-Then fuzzy rules given in Table 1. The fuzzy rule base is of the form:

$$IF\;E_{r} \;is\;A_{1}^{i} \;and\;\Delta E_{r} \;is\;A_{2}^{i} \;THEN\;\alpha_{g} \;is\;B^{i}$$
(11)

where i = 1…25, \(A_{1}^{i}\) and \(A_{2}^{i}\) are the input fuzzy sets, and \(B^{i}\) are the output singletons of the control action.
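The following sketch illustrates the zero-order Takagi–Sugeno inference used by this controller (the membership function breakpoints and the singleton outputs are placeholders; the actual values are defined by Figs. 9, 10, 11 and Table 1):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

# Placeholder fuzzy partitions for Er and dEr (labels NB, NS, Z, PS, PB).
BREAKS = [(-2.0, -1.0, -0.5), (-1.0, -0.5, 0.0), (-0.5, 0.0, 0.5),
          (0.0, 0.5, 1.0), (0.5, 1.0, 2.0)]

# Placeholder singleton steering outputs (rad) indexed by the 5 x 5 rule table;
# the real values come from Fig. 11 and Table 1.
RULE_OUT = np.linspace(-0.6, 0.6, 25).reshape(5, 5)

def ts_steering(er, d_er):
    """Zero-order T-S inference: weighted average of the rule singletons."""
    mu_e = np.array([tri(er, *p) for p in BREAKS])     # memberships of Er
    mu_d = np.array([tri(d_er, *p) for p in BREAKS])   # memberships of dEr
    w = np.outer(mu_e, mu_d)                           # rule firing strengths (product)
    if w.sum() < 1e-9:
        return 0.0                                     # no rule fired
    return float((w * RULE_OUT).sum() / w.sum())       # defuzzified steering angle
```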

Fig. 9

MFs of Er

Fig. 10

MFs of ΔEr

Fig. 11

Values of the output control

Table 1 Fuzzy rule base for avoidance

5.4.2 Goal seeker

In the case of navigation toward a desired destination (goal), the robot must be able to move in the direction of the goal while still avoiding obstacles. A second Takagi–Sugeno (T–S) fuzzy controller is used in the 3D VRML environment to drive the robot to the goal coordinates (xg, yg). The global control structure is shown in the block diagram of Fig. 12; it consists of both T–S fuzzy controllers (obstacle avoider and goal seeker). The second controller uses as inputs the distance between the robot and the goal (drg) and the angle \(\theta_{rg}\). These two variables are calculated using the following equations:

Fig. 12

Global control structure

$$d_{rg} = \sqrt {\left( {x_{g} - x_{r} } \right)^{2} + \left( {y_{g} - y_{r} } \right)^{2} }$$
(12)
$$\theta_{d} = \arctan \left( {\frac{{y_{g} - y_{r} }}{{x_{g} - x_{r} }}} \right)$$
(13)
$$\theta_{rg} = \theta_{d} - \theta_{r}$$
(14)

The inputs are defined by means of the membership functions depicted in Figs. 13 and 14. This T–S fuzzy controller generates the appropriate steering angle and velocity. These control actions are represented by singletons, as shown in Figs. 15 and 16 respectively, where: VN: Very Near, NR: Near, FR: Far, VFR: Very Far, NB: Negative Big, NM: Negative Medium, NS: Negative Small, ZZ: Zero, PS: Positive Small, PM: Positive Medium, PB: Positive Big, VS: Very Slow, SL: Slow, FS: Fast, VF: Very Fast.
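The controller inputs can be computed directly from Eqs. (12)-(14). The sketch below uses atan2 to obtain a quadrant-correct angle and wraps the heading error to (−π, π] (these are our implementation choices, not stated in the paper):

```python
import numpy as np

def goal_inputs(xr, yr, theta_r, xg, yg):
    """Distance d_rg and heading error theta_rg to the goal, Eqs. (12)-(14)."""
    d_rg = np.hypot(xg - xr, yg - yr)                  # Eq. (12)
    theta_d = np.arctan2(yg - yr, xg - xr)             # Eq. (13), quadrant-correct
    theta_rg = theta_d - theta_r                       # Eq. (14)
    theta_rg = np.arctan2(np.sin(theta_rg), np.cos(theta_rg))  # wrap to (-pi, pi]
    return d_rg, theta_rg
```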

Fig. 13

MFs of the distance drg

Fig. 14

MFs of the angle θrg

Fig. 15

Values of the steering

Fig. 16

Values of the robot velocity

The rule-base used for this autonomous task is summarized in Table 2.

Table 2 Fuzzy rules for the goal seeking behavior

6 Results and discussions

In this section, to demonstrate the efficiency and the validity of the presented approach, examples of mobile robot navigation in the 3D Virtual Reality Toolbox are presented. The 3D simulated environment is shown in Fig. 17.

Fig. 17

3D simulated environment

6.1 Obstacle avoidance

To test the visual obstacle avoidance task, we first consider robot motion without goal seeking, based on the structure presented previously in Fig. 5. In this application, we apply the following conditions:

  • If the environment is free of obstacles, the mobile robot moves forward (α = 0).

  • Otherwise, the robot executes the action generated by the fuzzy controller to avoid collisions, according to the values of Er and ΔEr.

The simulation results of this task using the first fuzzy controller are presented in Fig. 18. This example presents scenes of the mobile robot navigating from a given starting position (x0, y0) in order to move freely in its environment without colliding with obstacles. For each frame, the camera view and the optical flow vectors are visible in the top left and top right corners, respectively. The acquired image indicates the state of the environment in front of the robot. The optical flow values are then calculated, allowing the robot to determine the positions and shapes of obstacles in the two image halves. The nearest detected obstacle is indicated by the flow vectors (in yellow).

Fig. 18

Frames of robot navigation (captured image in top left, optical flow calculated in top right)

The developed fuzzy avoidance system is able to generate the steering control action needed to avoid collisions. As can be seen from these scenarios, the fuzzy controller accomplishes this task with good performance, and the robot moves freely and autonomously.

The obtained path in the 2D environment is shown in Fig. 19, illustrating the robot's ability to detect and avoid obstacles. In this figure, the robot positions shown previously in the frames are indicated by the points (A, …, J). The achieved path shows that the proposed controllers perform well.

Fig. 19

Robot path

Figures 20 and 21 give the variation of the right and left flow values and the error between them. The time to contact (TTC) is computed from the optical flow values of the image sequence acquired during the robot motion; its estimate is shown in Fig. 22. In each graph, as an example, the moments of obstacle avoidance (turning left or right) are indicated by dashed lines.

Fig. 20

Optical flows Fleft and Fright

Fig. 21

Error of flows

Fig. 22

Time to contact

A second example for mobile robot navigation is given in Fig. 23.

Fig. 23

Example of robot navigation

6.2 Goal seeking

In this second test, we demonstrate the efficiency of the designed fuzzy goal seeker in the 3D environment. The results are given in Fig. 24a–c, presenting the robot positions during movement (at the start, in motion toward the target, and at the final destination). For each scene, the environment observed by the camera is also depicted. In this example, the mobile robot starts from the point (xr = 5, yr = 5) with \(\theta_{r} = 180^{\circ}\) and moves to the goal position (xg = 15, yg = 18), shown in yellow.

Fig. 24

Frames in the start, the middle and the end

As depicted, the designed fuzzy controller steers the robot to the final goal. Figure 25 illustrates the 2D robot path to the target. Smooth and correct actions are generated to drive the robot toward the desired point, as shown in Fig. 26a, b: the robot steers at the beginning, after which the steering angle settles to zero, while the speed decreases when approaching the goal. Figure 26c shows the distance between the robot and the goal, which converges to zero.

Fig. 25

Robot trajectory for goal seeking

Fig. 26

Curves of steering, speed and the distance to the goal

The proposed fuzzy controller for goal seeking has thus shown its effectiveness in accomplishing this task, giving the robot the capability to navigate autonomously.

6.3 Goal seeking with obstacle avoidance

In the case of navigation in the presence of obstacles, the robot uses the two designed fuzzy controllers based on the structure shown previously in Fig. 12. Using the fuzzy goal seeker, the robot moves freely toward the goal; if an obstacle is detected by the optical flow strategy, the fuzzy obstacle avoider is activated to generate avoidance actions.
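A hedged sketch of this switching logic is given below. It reuses the ts_steering sketch of Sect. 5.4.1; the TTC threshold, the fixed avoidance speed and goal_seeker_stub are placeholders (the stub stands in for the goal-seeker FLC of Table 2 and is not the paper's controller):

```python
import numpy as np

V_FIXED = 0.3        # placeholder fixed speed used while avoiding
TTC_THRESHOLD = 2.0  # placeholder activation threshold

def goal_seeker_stub(d_rg, theta_rg):
    """Placeholder stand-in for the goal-seeker FLC of Table 2 (not the real rules)."""
    v = min(0.5, 0.1 * d_rg)                    # slow down near the goal
    alpha = float(np.clip(theta_rg, -0.6, 0.6))
    return alpha, v

def choose_action(ttc, er, d_er, d_rg, theta_rg):
    """Switch between the obstacle avoider and the goal seeker."""
    if ttc < TTC_THRESHOLD:                       # obstacle close: avoidance first
        return ts_steering(er, d_er), V_FIXED     # avoider FLC (Sect. 5.4.1 sketch)
    return goal_seeker_stub(d_rg, theta_rg)       # free path: head toward the goal
```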

An example of these autonomous sub-tasks is given in Fig. 27a–d. The results show the robot positions together with the observed image of the environment (top left). When an obstacle is detected, the optical flow values are calculated (top right), indicating that the optical-flow-based fuzzy obstacle avoider is active. The robot navigates autonomously and reaches the goal while avoiding collisions, which shows the effectiveness of the proposed fuzzy control strategy. The obtained 2D robot trajectory is illustrated in Fig. 28, where the situations of scenes a–d are indicated by the points (a–d).

Fig. 27

Frames of robot navigation (a-b-c-d)

Fig. 28

Robot trajectory for goal seeking with obstacle avoidance

As depicted, the robot is able to navigate autonomously and reach the final goal while avoiding obstacles successfully. The stability and smoothness of the robot motion illustrate the efficiency of the proposed control systems.

6.4 Pursuing a moving target with obstacle avoidance

In this section, the proposed visual fuzzy navigators are tested on tracking a moving target in the environment, in order to verify their performance.

The simulation results of this task are shown in Fig. 29a–c. The proposed controllers steer the robot toward the target and pursue its motion effectively, as depicted in Fig. 30, which shows the trajectories of the robot and the moving target (indicated by the positions A–C). The controllers achieve good tracking together with the obstacle avoidance functionality.

Fig. 29

Frames of moving pursuing with OA (a-b-c)

Fig. 30

Robot path in 2D

7 Conclusion

In this paper, we have presented a visual navigation system for a wheeled mobile robot. The proposed control strategy is based on the Horn–Schunck optical flow algorithm to detect obstacles and objects in the navigation environment. Two fuzzy logic controllers are used to infer the mobile robot motion actions: the first drives the robot toward the final goal coordinates by generating the speed and the steering angle, while the second steers the robot around the nearest obstacles using the optical flow values of the two parts of the acquired image. The captured image is processed, resized and subdivided into two parts in order to generate the appropriate control actions.

The proposed approach has been simulated in a 3D environment using the VRML library. Its efficiency is demonstrated through several simulation tests of the different sub-tasks (obstacle avoidance, goal seeking, navigation to a goal with obstacle avoidance, and pursuing a moving target in a cluttered environment). The obtained results confirm the utility and effectiveness of the designed controllers for robot navigation.

This control strategy can be improved by using other visual algorithms and enhancements in order to extract information and objects more effectively.

As future work, we will focus on the real-time implementation of the studied control strategy. Particular attention will be given to the use of vision for tracking and intercepting moving objects in cluttered environments, where the target and its coordinates will be determined by the vision system.