1 Introduction

In recent years, considerable research has been carried out worldwide to find suitable navigation methods for mobile robots operating without human interaction in static and dynamic environments. One of the major challenges of autonomous navigation for mobile robots is the detection and avoidance of obstacles during the navigation task. This problem can be addressed by combining different methods or algorithms in order to attain the best results.

A biologically inspired neural network in [34] is designed for real-time navigation with obstacle avoidance of a mobile robot and a multi-joint robot manipulator in a nonstationary environment. Many studies have used soft computing algorithms to navigate mobile robots in different environments. A collision-free path is constructed for a robot moving among obstacles based on two neural networks [8]. The first neural network determines the free space using sensors, and the second finds a safe direction for the next section of the robot's path in the workspace while avoiding the nearest obstacles.

A reactive immune network (RIN) is designed in [12] for mobile robot navigation in unknown environments. In addition, an adaptive virtual target method is integrated to solve the local minima problem in navigation. In [25], an artificial neural network controller is designed for an autonomous mobile robot using a multilayer feedforward neural network, which enables the robot to navigate in a real-world dynamic environment. Using a technique based on neural networks and reinforcement learning in [27], a mobile robot learns to generate efficient navigation rules automatically, without rules set initially by experts or pre-constructed environments.

In [7], three neural controllers are developed for robot navigation in urban environments. In the first and second controllers, all the input units are fully connected with the hidden units. In the third controller, the hidden units are divided into three groups. The results show that the neural controller with separated hidden neurons has a faster response to sensory input.

A neural network-based camera calibration method is presented in [4] for the global localization of mobile robots with monocular vision. The camera is calibrated using the neural network-based method, and monocular vision is used to initialize and recalibrate the robot position. In [2], the authors survey mobile robot navigation using neural networks and the developments of neural networks over the last few years with applications to mobile robot navigation. In [36], the authors develop an algorithm called the escaping algorithm (EA), which does not need any prior knowledge about dynamic obstacles. It uses a Kalman filter to predict the motion of dynamic objects and combines the predictions with a potential field approach to navigate safely in a dynamic environment. An online navigation technique for a wheeled mobile robot (WMR) in an unknown dynamic environment is developed in [10] using fuzzy logic techniques. Tracking fuzzy logic control (TFLC) and obstacle avoidance fuzzy logic control (OAFLC) are combined to move the mobile robot to the target along a collision-free path. The algorithm starts with the former, and if the sensors detect any obstacles in front of the robot, it switches over to the latter. However, its time consumption is high, since fuzzy computation is always used even when the path is free.

In [17], two navigation algorithms are presented for a wheeled mobile robot in unknown environments. The first algorithm, based on a geometrical technique, determines the set of all possible collision-free steering angles and selects the steering angle that is nearest to the target with the widest gap. The second algorithm, based on a neural network technique, generates an optimized path using an objective function that minimizes the traveled distance to the goal position while avoiding obstacles.

Hybrid path planning of a mobile robot in cluttered environments is presented in [21] and [37]. An adaptive neuro-fuzzy inference system is constructed for path planning of a mobile robot in [15, 24, 29] and [31] for different environments.

In [26, 33] and [5], mobile robot navigation techniques using fuzzy logic controllers are proposed for static and dynamic environments. The overall concepts are the same, but the ranges of the membership functions are different.

Electrostatic potential field methods are presented in [11, 14, 18] and [38] for the navigation of a group of robots and of a single robot in structured and unstructured environments. They have been compared with other existing methods for mobile robot navigation. However, improvement is still required for dynamic environments with respect to time and path length.

A hybrid optimal method of mobile robot navigation is developed using fuzzy logic and a genetic algorithm in [6], in which the genetic algorithm optimizes the length of the path generated by the fuzzy system.

Improved fuzzy logic techniques are developed in [3, 13, 16, 19, 22] and [30] for static and highly cluttered environments. However, they still have a major drawback in dynamic environments.

A comparative survey of different soft computing methods and heuristic methods in robot path planning is presented in [1] and [35]. Among these methods, fuzzy logic and neural network-based methods are more advantageous than the other heuristic methods.

As the existing methods above require more computational time and produce longer paths, we introduce a method for generating a collision-free, near-optimal path and speed for a mobile robot in a dynamic environment containing moving and static obstacles using an artificial neural network. A multilayer perceptron (MLP) is used to choose a collision-free segment from a set of five segments, and the network also controls the speed of the robot.

The paper is organized as follows: In Sect. 2, we provide preliminaries to prepare for the rest of the work. Section 3 gives the motion planning algorithm of a mobile robot. Experimental results, evaluation and discussion, and comparison are presented in Sects. 4, 5 and 6, respectively. Finally, conclusions are given in Sect. 7.

2 Preliminaries

2.1 Problem definition

The formal definition of our motion planning problem is as follows. Let R be a robot moving in a two-dimensional workspace W. The configuration of an arbitrary object is a specification of the position of every point on it relative to a fixed reference frame. Let FR and FW be Cartesian frames embedded in R and W, respectively. So, a configuration of R is a specification of the position (x, y) and orientation of FR with respect to FW. We use the notation R(x) to refer to robot R configured at x in W. Let there be n moving obstacles \(O_1\), \(O_2,\ldots ,O_n\), whose combined state at time t is denoted by O(t). The obstacles have maximal velocities \(v_1\), \(v_2,\ldots ,v_n\). The robot has a maximal velocity V which is larger than each of the maximal velocities of the obstacles. We do not assume any knowledge of the velocities and directions of motion of the moving obstacles, other than that they have a maximal velocity. Let \(\Omega \) be a list of adjacent points that represents the path of the robot. Let s, \(g\in \Omega \) be the start and goal configurations of the robot, respectively, and let \(t_0\) be the start time. The task is to compute a path \(\Pi :[t_0,T]\rightarrow \Omega \), such that \(\Pi (t_0)= s\) and \(\Pi (T)= g\), and the robot never collides with the moving obstacles, i.e., \(\forall t\in [t_0,T] : R(\Pi (t)) \cap O(t) = \emptyset \). T is the arrival time of the robot R at its goal configuration.
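The collision-free condition \(\forall t\in [t_0,T] : R(\Pi (t)) \cap O(t) = \emptyset \) can be checked numerically at discrete time steps. The following is a minimal sketch, under the assumption that the robot and the obstacles are modeled as axis-aligned rectangles (as in the simulations later in the paper); `robot_rect_at` and `obstacle_rects_at` are hypothetical callbacks giving the configurations at time t.

```python
def rects_overlap(a, b):
    """Axis-aligned rectangle intersection test; a, b = (x1, y1, x2, y2) with x1 < x2, y1 < y2."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def path_is_collision_free(robot_rect_at, obstacle_rects_at, t0, T, dt=0.1):
    """Sampled check of R(Pi(t)) intersected with O(t) being empty over [t0, T].

    robot_rect_at(t) -> rectangle of R(Pi(t));
    obstacle_rects_at(t) -> list of obstacle rectangles O(t).
    """
    t = t0
    while t <= T:
        r = robot_rect_at(t)
        if any(rects_overlap(r, o) for o in obstacle_rects_at(t)):
            return False
        t += dt
    return True
```

Sampling at a step `dt` small relative to the maximal velocities is an approximation of the continuous condition.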

2.2 Artificial neural networks

Our main objective is to find a collision-free path of a robot system from a given initial position to some goal position. The environment is a two-dimensional space with obstacles. The neural network that we use is the multilayer perceptron (MLP). The structure of the neural network is given in Fig. 1. The network contains three layers: input, hidden and output. The layers are connected by synaptic weights. The learning of the network is realized by the back-propagation (BP) algorithm, which is based on the error-correction principle. The parameters used during the learning process are as follows:

  i. \(x_i\) : The ith input

  ii. \(y_j\) : The output of the jth hidden neuron

  iii. \(O_k\) : The output of the kth output neuron

  iv. \(d_k\) : The desired output of the kth output neuron

  v. \(V_{ji}\) : The weight from the ith input to the jth hidden neuron

  vi. \(W_{kj}\) : The weight from the jth hidden neuron to the kth output neuron

  vii. \(\eta \) : The learning rate

Fig. 1 Structure of MLP network

There are I inputs, J hidden neurons and K output neurons. The weights of the hidden layer are updated using the equation:

$$\begin{aligned} V_{ji}(n+1) = V_{ji}(n)+ \eta \times \delta _j(n)\times x_i(n) \end{aligned}$$
(1)

\(\delta _j\) is the error signal produced by the jth hidden neuron.

The weights of the output layer are updated using the equation:

$$\begin{aligned} W_{kj}(n+1) = W_{kj}(n)+ \eta \times \delta _k(n)\times y_j(n) \end{aligned}$$
(2)

\(\delta _k\) is the error signal produced by the kth output neuron.

$$\begin{aligned} \delta _k(n) = (d_k-O_k) \times (1-O_k) \times O_k \end{aligned}$$
(3)

Using \(\delta _k\), we can calculate \(\delta _j\) as follows:

$$\begin{aligned} \delta _j = (1-y_j) \times y_j \times \varSigma ^{K}_{k=1} \delta _k \times W_{kj} \end{aligned}$$
(4)

So, for any input, we find the output of the hidden neurons and then the output of the output layer neurons. The outputs in each layer are computed using the sigmoid function. The weights of the hidden and output layer neurons are updated using the above equations. The error signals of the hidden neurons are back-propagated from the output layer to the hidden layer. This process is repeated for the next input–output pattern, and so on, until the error falls below a pre-specified threshold. We minimize the squared-error cost function:

$$\begin{aligned} E = \frac{1}{2} \varSigma ^K_{k=1} (d_k-O_k)^2 \end{aligned}$$
(5)
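The training procedure above can be sketched compactly in code. This is a minimal, illustrative implementation of Eqs. (1)–(5); the layer sizes, learning rate and weight initialization range are assumptions for the example, and the dummy (bias) input of \(-\) 1 used later in the paper is omitted for brevity.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class MLP:
    """Three-layer perceptron trained with the BP updates of Eqs. (1)-(4)."""

    def __init__(self, n_in, n_hidden, n_out, eta=0.5, seed=0):
        rng = random.Random(seed)
        self.eta = eta
        # V[j][i]: weight from input i to hidden neuron j
        self.V = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
        # W[k][j]: weight from hidden neuron j to output neuron k
        self.W = [[rng.uniform(-0.5, 0.5) for _ in range(n_hidden)] for _ in range(n_out)]

    def forward(self, x):
        self.x = x
        self.y = [sigmoid(sum(v * xi for v, xi in zip(row, x))) for row in self.V]
        self.o = [sigmoid(sum(w * yj for w, yj in zip(row, self.y))) for row in self.W]
        return self.o

    def backward(self, d):
        # Eq. (3): error signal of each output neuron
        dk = [(dk_ - ok) * (1 - ok) * ok for dk_, ok in zip(d, self.o)]
        # Eq. (4): error signal of each hidden neuron, back-propagated through W
        dj = [(1 - yj) * yj * sum(dk[k] * self.W[k][j] for k in range(len(dk)))
              for j, yj in enumerate(self.y)]
        # Eq. (2): output-layer weight update
        for k, row in enumerate(self.W):
            for j in range(len(row)):
                row[j] += self.eta * dk[k] * self.y[j]
        # Eq. (1): hidden-layer weight update
        for j, row in enumerate(self.V):
            for i in range(len(row)):
                row[i] += self.eta * dj[j] * self.x[i]
        # Eq. (5): squared-error cost for this pattern
        return 0.5 * sum((dk_ - ok) ** 2 for dk_, ok in zip(d, self.o))
```

Repeatedly calling `forward` and `backward` over the training patterns drives the error of Eq. (5) below the chosen threshold.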

3 The motion planning algorithm

Fig. 2 Robot line of sight segment to destination

The main idea of the proposed model is to generate a collision-free, near-optimal path and speed for a mobile robot in a dynamic environment containing moving and static obstacles using an artificial neural network. For each robot motion, the workspace ahead is divided into 5 segments, each of 30 degrees. As in Fig. 2, the robot is at position I and G is the global destination position. 2I3, 1I2, 4I1, 5I4 and 6I5 are the five segments. I0 is perpendicular to the direction of the global destination position G. The robot will choose a collision-free segment and move ahead a distance of D = 20 at a speed controlled by the MLP neural network. The collision-free segment is also computed by the MLP.
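The five-segment fan of Fig. 2 can be sketched geometrically as follows. This is a minimal sketch under the assumption that the five 30-degree sectors are centred on the robot's line of sight to the goal; the helper names `segment_headings` and `local_destination` are illustrative, not from the paper.

```python
import math

D = 20.0  # motion step length per the text

def segment_headings(robot, goal, n_segments=5, seg_deg=30.0):
    """Centre heading (radians) of each sector, leftmost first,
    for a fan of n_segments sectors of seg_deg degrees centred on
    the line of sight from robot to goal."""
    los = math.atan2(goal[1] - robot[1], goal[0] - robot[0])
    half_span = math.radians(seg_deg) * n_segments / 2.0
    return [los + half_span - (i + 0.5) * math.radians(seg_deg)
            for i in range(n_segments)]

def local_destination(robot, heading, step=D):
    """Local destination reached by moving `step` units along `heading`."""
    return (robot[0] + step * math.cos(heading),
            robot[1] + step * math.sin(heading))
```

With five 30-degree sectors the fan spans 150 degrees, and the middle sector (Segment3) points directly along the line of sight to G.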

Fig. 3 Robot segment with obstacles

Table 1 Connection weights in \(W_{kj}\)
Table 2 Connection weights in \(V_{ji}\)

As in Fig. 3, the robot is at position I. O1, O2 and O4 are static obstacles. O3 and O5 are moving obstacles.

Definition 1 (Critical obstacle) The moving obstacle that reaches a segment in the minimum time, among all moving obstacles, is chosen as the critical obstacle for that segment for that motion.
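Definition 1 can be sketched as a minimum-time search over the moving obstacles. The helper `time_to_reach(obs, segment)` is a hypothetical function returning an obstacle's arrival time at the segment, or `None` if it cannot reach it.

```python
def critical_obstacle(moving_obstacles, segment, time_to_reach):
    """Return (obstacle, time) for the moving obstacle reaching `segment`
    in minimum time, or (None, None) if no obstacle can reach it."""
    best, best_t = None, None
    for obs in moving_obstacles:
        t = time_to_reach(obs, segment)
        if t is not None and (best_t is None or t < best_t):
            best, best_t = obs, t
    return best, best_t
```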

The input to the MLP network is the time taken by the critical obstacle to reach its segment, provided the segment is not blocked by static or moving obstacles at time t1. If the segment is blocked, then its input value is 0.

The output of the network is the time taken by the robot to reach its local destination, provided the segment is not blocked by static or moving obstacles at time t1. If the segment is blocked, then its output value is 0.

3.1 Input to MLP

The size of each input sample/pattern is 5, one value per segment. The five values, from left to right, correspond to the 5 segments from left to right relative to the current robot position. The input is coded as follows:

  i. A value of 0 means that the particular segment is occupied by static or moving obstacles at time t1, when the robot is at a particular position.

  ii. Any value greater than 0 is the time taken by the critical obstacle to reach its segment.

  iii. If the segment is not blocked by static or moving obstacles and there is no critical obstacle for that segment, then the input value is 1.5.
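The input coding above can be sketched directly. This is a minimal illustration; the argument layout (`blocked` flags plus per-segment critical-obstacle times) is an assumption about how the per-segment information would be gathered.

```python
FREE = 1.5  # input value for a free segment with no critical obstacle

def encode_inputs(blocked, critical_times):
    """Build the 5-value input pattern of Sect. 3.1.

    blocked: 5 booleans, True if the segment is occupied at time t1;
    critical_times: 5 values, the critical obstacle's arrival time
    for the segment, or None if no critical obstacle exists.
    """
    pattern = []
    for is_blocked, t in zip(blocked, critical_times):
        if is_blocked:
            pattern.append(0.0)       # rule i: blocked segment
        elif t is not None:
            pattern.append(t)         # rule ii: critical-obstacle time
        else:
            pattern.append(FREE)      # rule iii: free segment
    return pattern
```

For example, the pattern {0, 0, 0.4, 1.5, 1.5} used for the second motion below corresponds to Segment1 and Segment2 blocked, a critical obstacle reaching Segment3 in 0.4 s, and Segment4 and Segment5 free.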

3.2 Output from MLP

The size of each output pattern is also 5, matching the number of segments and the input size. The input–output mapping is as follows:

  i. If the input value is 0 (segment blocked by an obstacle), then the corresponding output value is 0, which means robot motion is not possible in that segment.

  ii. If the input value is from 0.1 to 0.5, then the corresponding output value is 0, which means robot motion is not possible in that segment because of a moving obstacle.

  iii. If the input value is from 0.6 to 1.0, then the corresponding output value is 0.5, which means the robot will take time = 0.5 to reach the local destination in that segment, taking the middle path of the segment to avoid the moving obstacle.

  iv. If the input value is greater than 1.0, then the corresponding output value is 1, which means the robot will take time = 1.0 to reach the local destination in that segment, taking the middle path of the segment to avoid a moving obstacle if one exists.
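The mapping above, used to construct the desired (target) outputs for training, can be sketched as a simple threshold function; the helper names are illustrative.

```python
def target_output(v):
    """Desired output for one segment given its input value (Sect. 3.2)."""
    if v <= 0.5:      # rules i-ii: blocked, or obstacle arrives within 0.5 s
        return 0.0
    if v <= 1.0:      # rule iii: obstacle arrives in 0.6-1.0 s -> slow motion
        return 0.5
    return 1.0        # rule iv: free enough -> full-time motion

def target_pattern(input_pattern):
    """Desired 5-value output pattern for a 5-value input pattern."""
    return [target_output(v) for v in input_pattern]
```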

The neural network learns a set of target outputs (desired outputs) for a given set of input patterns. The output at each layer, i.e., the hidden and output layers, is determined using the sigmoid function.

During training, the difference between the actual output and the desired output is minimized by adjusting the connection weights using Eqs. (1) and (2).

Finally, in testing/real operation, the robot chooses a segment and a speed to move ahead toward the goal according to the output of the network. The MLP is operated for each robot motion, moving a distance of D = 20 to reach its local destination.

If at least one network output is 0.5 or 1.0 (meaning one or more segments are safe to move through), then the robot chooses, among the safe segments, the one nearest to the global destination position.
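The selection rule above can be sketched as follows. The threshold below which a network output is treated as "unsafe" is an assumption for the example (the trained network emits values close to 0, 0.5 or 1.0, as the traces in Sect. 5 show).

```python
import math

def choose_segment(outputs, local_destinations, goal, safe_threshold=0.4):
    """Pick the safe segment whose local destination is nearest to the goal.

    outputs: 5 MLP outputs (approximately 0, 0.5 or 1.0);
    local_destinations: 5 (x, y) local destination points, one per segment;
    returns the segment index, or None if no segment is safe.
    """
    best, best_d = None, None
    for k, (out, dest) in enumerate(zip(outputs, local_destinations)):
        if out < safe_threshold:   # segment judged unsafe by the MLP
            continue
        d = math.hypot(goal[0] - dest[0], goal[1] - dest[1])
        if best_d is None or d < best_d:
            best, best_d = k, d
    return best
```

An output of 0.5 rather than 1.0 for the chosen segment additionally tells the robot to move at the slower speed (time = 0.5) through that segment.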

4 Simulation results

Fig. 4 Initial positions and segments for the first motion

In our application, the size of the robot is taken as \(4\times 3\) square units (pixels). The results were obtained on a machine with an Intel(R) Core(TM) i5 CPU at 3.20–3.33 GHz and 4 GB RAM, running Microsoft Visual C++ 2005 with OpenGL under Windows 7 Ultimate, 32-bit operating system.

Tables 1 and 2 show the connection weights in \(W_{kj}\) and \(V_{ji}\), respectively. Figures 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18 and 19 show the robot motion in the same dynamic environment (first environment), where I is the initial position of the robot and G is the goal position. The black rectangular boxes represent the obstacles. Obstacles labeled O1, O2, O3 and O4 are moving obstacles, and unlabeled ones are static obstacles. The x- and y-axes of the robot workspace range from \(-\) 120 to 120. The horizontal axis is the x-axis, and the vertical one is the y-axis. Similarly, Figs. 20–26 show the robot motion in another dynamic environment (second environment).

Fig. 5 First motion and segments for second motion

Fig. 6 Second motion and segments for third motion

Fig. 7 Third motion and segments for fourth motion

Fig. 8 Fourth motion and segments for fifth motion

Fig. 9 Fifth motion and segments for sixth motion

Fig. 10 Sixth motion and segments for seventh motion

Fig. 11 Seventh motion and segments for eighth motion

Fig. 12 Eighth motion and segments for ninth motion

Fig. 13 Ninth motion and segments for tenth motion

Fig. 14 Tenth motion and segments for eleventh motion

Fig. 15 Eleventh motion and segments for twelfth motion

Fig. 16 Twelfth motion and segments for thirteenth motion

Fig. 17 Thirteenth motion and segments for fourteenth motion

Fig. 18 Robot reaching the goal position

Fig. 19 Robot reaching the goal position without unused paths

5 Evaluation and discussion

In our MLP network, the number of input neurons is \(I=6\) (including the dummy input of \(-\) 1), the number of hidden neurons is \(J=25\) and the number of output neurons is \(K=5\).

5.1 Computation of path in the first environment

In Figs. 4–19, the initial position I of the robot is \(x=-100\) and \(y=-90\) and the goal position G of the robot is \(x=90\), \(y=80\).

O1 is at the initial position with x-axis parameter \(x1=-59\), \(x2=-65\) and y-axis parameter \(y1=-15\), \(y2=-8\). O2 is at the initial position with x-axis parameter \(x1=-90\), \(x2=-82\) and y-axis parameter \(y1=-57\), \(y2=-45\). O3 is at the initial position with x-axis parameter \(x1=-10\), \(x2=-4\) and y-axis parameter \(y1=-70\), \(y2=-59\). O4 is at the initial position with x-axis parameter \(x1=23\), \(x2=31\) and y-axis parameter \(y1=-52\), \(y2=-45\).

O1 covers a distance of 5 units in the positive x direction and 3 units in the negative y direction in 1 s. O2 moves a distance of 5 units in the positive x direction and 4 units in the negative y direction in 1 s. O3 moves a distance of 1 unit in the negative x direction and 5 units in the positive y direction in 1 s. O4 moves a distance of 3 units in the negative x direction and 1 unit in the negative y direction in 1 s.
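The constant-velocity obstacle motion described above can be sketched as follows; the rectangle convention `(x1, y1, x2, y2)` is an assumption for the example, and the velocity vectors are those stated in the text.

```python
def obstacle_at(rect0, velocity, t):
    """Position of a constant-velocity rectangular obstacle at time t.

    rect0 = (x1, y1, x2, y2) at t = 0; velocity = (vx, vy) in units/s.
    """
    x1, y1, x2, y2 = rect0
    vx, vy = velocity
    return (x1 + vx * t, y1 + vy * t, x2 + vx * t, y2 + vy * t)

# Velocities from the text (units per second):
# O1: (+5, -3), O2: (+5, -4), O3: (-1, +5), O4: (-3, -1)
```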

We call the leftmost segment Segment1, the next Segment2, and so on, up to the rightmost one, Segment5.

Fig. 20 Initial positions and segments for the first motion

Fig. 21 First motion and segments for second motion

Fig. 22 Second motion and segments for third motion

Fig. 23 Third motion and segments for fourth motion

Fig. 24 Fourth motion and segments for fifth motion

Fig. 25 Robot reaching the goal position

Fig. 26 Robot reaching the goal position without unused paths

  i.

    The initial position I of the mobile robot and of the moving obstacles is shown in Fig. 4. The segments for the first motion are also shown in the figure.

  ii.

    In Fig. 5, the robot chooses Segment3 for the first motion. There is no critical obstacle for any of the segments.

    The input to the MLP network for this motion is

    $$\begin{aligned} \big \{1.5, 1.5, 1.5, 1.5, 1.5\big \} \end{aligned}$$

    The output is

    $$\begin{aligned} \big \{0.999567, 0.999999, 0.999984, 1.000000, 0.999996\big \} \end{aligned}$$

    The segments for the second motion are also shown in the figure.

  iii.

    In Fig. 6, the robot chooses Segment4 for the second motion. O2 is the critical obstacle for Segment3, and it reaches the segment in 0.4 s for the second motion. There is no critical obstacle for the other segments. Segment1 and Segment2 are blocked by O2 just before the second motion. The input to the MLP network for this motion is

    $$\begin{aligned} \big \{0, 0, 0.4, 1.5, 1.5\big \} \end{aligned}$$

    The output is

    $$\begin{aligned} \big \{0.000000, 0.000247, 0.000002, 0.999988, 0.999989\big \} \end{aligned}$$

    The segments for the third motion are also shown in the figure.

  iv.

    In Fig. 7, the robot chooses Segment5 for the third motion. There is no critical obstacle for any of the segments. Segment1 is blocked by O2 just before the third motion. Segment2, Segment3 and Segment4 are blocked by static obstacles. The input to the MLP network for this motion is

    $$\begin{aligned} \big \{0, 0, 0, 0, 1.5\big \} \end{aligned}$$

    The output is

    $$\begin{aligned} \big \{0.000000, 0.000061, 0.000000, 0.000000, 1.000000\big \} \end{aligned}$$

    The segments for the fourth motion are also shown in the figure.

  v.

    In Fig. 8, the robot chooses Segment3 for the fourth motion. There is no critical obstacle for any of the segments. Segment1 and Segment2 are blocked by static obstacles. The input to the MLP network for this motion is

    $$\begin{aligned} \big \{0, 0, 1.5, 1.5, 1.5\big \} \end{aligned}$$

    The output is

    $$\begin{aligned} \big \{0.000000, 0.000246, 1.000000, 0.999988, 0.999989\big \} \end{aligned}$$

    The segments for the fifth motion are also shown in the figure.

  vi.

    In Fig. 9, the robot chooses Segment3 for the fifth motion. There is no critical obstacle for any of the segments, and no segment is blocked by an obstacle. The input to the MLP network for this motion is

    $$\begin{aligned} \big \{1.5, 1.5, 1.5, 1.5, 1.5\big \} \end{aligned}$$

    The output is

    $$\begin{aligned} \big \{0.999567, 0.999999, 0.999984, 1.000000, 0.999996\big \} \end{aligned}$$

    The segments for the sixth motion are also shown in the figure.

  vii.

    In Fig. 10, the robot chooses Segment5 for the sixth motion. O1 is the critical obstacle for Segment1, and it reaches the segment in 0.3 s for the sixth motion. There is no critical obstacle for the other segments. Segment3 and Segment4 are blocked by O2 just before the sixth motion, and Segment2 is blocked by a static obstacle. The input to the MLP network for this motion is

    $$\begin{aligned} \big \{0.3, 0, 0, 0, 1.5\big \} \end{aligned}$$

    The output is

    $$\begin{aligned} \big \{0.000000, 0.000054, 0.000000, 0.000000, 1.000000\big \} \end{aligned}$$

    The segments for the seventh motion are also shown in the figure.

  viii.

    In Fig. 11, the robot chooses Segment3 for the seventh motion. There is no critical obstacle for any segment. Segment1 and Segment5 are blocked by O3 and O4, respectively, just before the seventh motion. The input to the MLP network for this motion is

    $$\begin{aligned} \big \{0, 1.5, 1.5, 1.5, 0\big \} \end{aligned}$$

    The output is

    $$\begin{aligned} \big \{0.000000, 1.000000, 1.000000, 1.000000, 0.000000\big \} \end{aligned}$$

    The segments for the eighth motion are also shown in the figure.

  ix.

    In Fig. 17, the robot chooses Segment3 for the thirteenth motion. There is no critical obstacle for any of the segments. The input to the MLP network for this motion is

    $$\begin{aligned} \big \{1.5, 1.5, 1.5, 1.5, 1.5\big \} \end{aligned}$$

    The output is

    $$\begin{aligned} \big \{0.999567, 0.999999, 0.999984, 1.000000, 0.999996\big \} \end{aligned}$$

    The segments for the fourteenth motion are also shown in the figure.

  x.

    In Fig. 18, the robot reaches the global destination.

Fig. 27 Robot reaching the goal position for GA-Fuzzy in [32]

Fig. 28 Robot reaching the goal position for Fuzzy-WDO in [28]

Fig. 29 Robot reaching the goal position for potential field method in [38]

5.2 Computation of path in the second environment

In Figs. 20–26, the initial position I of the robot is \(x=-90\), \(y=-70\), and the goal position G of the robot is \(x=30\), \(y=70\).

O1 is at the initial position with x-axis parameter \(x1=-35\), \(x2=-40\) and y-axis parameter \(y1=4\), \(y2=-6\). O2 is at the initial position with x-axis parameter \(x1=-67\), \(x2=-73\) and y-axis parameter \(y1=-15\), \(y2=-8\). O3 is at the initial position with x-axis parameter \(x1=-94\), \(x2=-86\) and y-axis parameter \(y1=-57\), \(y2=-45\).

O1 covers a distance of 3 units in the negative x direction and 4 units in the positive y direction in 1 s. O2 moves a distance of 3 units in the positive x direction and 3 units in the negative y direction in 1 s. O3 moves a distance of 4 units in the positive x direction in 1 s.

Table 3 Comparison of path length and computational time for the second environment
  i.

    The initial position I of the mobile robot and of the moving obstacles is shown in Fig. 20. The segments for the first motion are also shown in the figure.

  ii.

    In Fig. 21, the robot chooses Segment3 for the first motion. O3 is the critical obstacle for Segment3, and it reaches the segment in 0.7 s for the first motion. There is no critical obstacle for the other segments. Segment4 is blocked by a static obstacle.

    The input to the MLP network for this motion is

    $$\begin{aligned} \big \{0,0,0.7,0,1.5\big \} \end{aligned}$$

    The output is

    $$\begin{aligned} \big \{0.000000, 0.000061, 0.460933, 0.000000, 1.000000\big \} \end{aligned}$$

    The segments for the second motion are also shown in the figure.

  iii.

    In Fig. 22, the robot chooses Segment2 for the second motion. O3 is the critical obstacle for Segment2, and it reaches the segment in 1.2 s for the second motion. There is no critical obstacle for the other segments. Segment1 is blocked by O3 just before the second motion, and Segment3, Segment4 and Segment5 are blocked by static obstacles. The input to the MLP network for this motion is

    $$\begin{aligned} \big \{0,1.2,0,0,0\big \} \end{aligned}$$

    The output is

    $$\begin{aligned} \big \{0.000000, 0.999996, 0.000000, 0.000000, 0.000001\big \} \end{aligned}$$

    The segments for the third motion are also shown in the figure.

  iv.

    In Fig. 23, the robot chooses Segment4 for the third motion. There is no critical obstacle for any of the segments. Segment2 and Segment3 are blocked by O2 just before the third motion. The input to the MLP network for this motion is

    $$\begin{aligned} \big \{1.5,0,0,1.5,1.5\big \} \end{aligned}$$

    The output is

    $$\begin{aligned} \big \{0.999972, 0.000002, 0.000000, 1.000000, 1.000000\big \} \end{aligned}$$

    The segments for the fourth motion are also shown in the figure.

  v.

    In Fig. 24, the robot chooses Segment3 for the fourth motion. O2 is the critical obstacle for Segment2, and it reaches the segment in 0.9 s for the fourth motion. Segment1 is blocked by O2 just before the fourth motion. The input to the MLP network for this motion is

    $$\begin{aligned} \big \{0,0.9,1.5,1.5,1.5\big \} \end{aligned}$$

    The output is

    $$\begin{aligned} \big \{0.000000, 0.534637, 1.000000, 0.999985, 0.999983\big \} \end{aligned}$$

    The segments for the fifth motion are also shown in the figure.

  vi.

    In Fig. 25, the robot reaches the global destination.

6 Comparison

The path of our method is shown in Fig. 26, and the paths of the other methods, namely GA-Fuzzy in [32], Fuzzy-WDO in [28] and the potential field method in [38], are shown in Figs. 27, 28 and 29, respectively. Table 3 shows a comparison of the path length and computational time of our proposed method with these standard methods in the same second environment. It shows that the proposed method gives a near-optimal path with less computational time compared to GA-Fuzzy in [32], Fuzzy-WDO in [28] and the potential field method in [38].

7 Conclusion

In this paper, we have described a new method for mobile robot navigation in dynamic environments containing static and moving obstacles using an artificial neural network. A multilayer perceptron (MLP) neural network is used to choose a collision-free segment from a set of five segments, and the network also controls the speed of the robot for each motion. For each motion, the MLP computes the time taken by the robot to reach its local destination for each collision-free segment, using the times taken by the critical obstacles. Finally, the robot chooses the segment and path nearest to the global destination. Simulation results show that the neural network gives a near-optimal path with less computational time than other standard approaches. In future work, we will consider automated driving systems, studying their performance, and the development of autonomous systems such as soldier robots, UAVs (unmanned aerial vehicles) and hill-climbing robots.