1 Introduction

Quality of service (QoS) refers to the performance level of a service provided by the network to the user. Most multimedia applications have strict QoS requirements that must be satisfied. The main aim of QoS provisioning is to achieve more deterministic network behavior, so that the information carried by the network can be delivered better and network resources can be utilized efficiently. However, providing QoS solutions and maintaining end-to-end QoS under end-user mobility are major challenges in MANETs [2, 11, 22, 37].

QoS routing does not merely find a route from source to destination; it requires a route that satisfies the end-to-end QoS requirements in terms of bandwidth or delay. It must compute paths suitable for the different types of traffic generated by various applications while maximizing the utilization of network resources. The major objectives of QoS routing are the following [3, 12, 29, 38]:

  • To find a path from source to destination that satisfies the user's requirements.

  • To optimize the usage of the network.

  • To reduce the network load when undesirable events such as congestion and path breaks occur in the network.

Maximizing data packet delivery in a fast-changing network topology without incurring a large routing overhead is a major issue in MANETs. The packet delivery ratio is reduced when broken links are encountered, because data packets are then forwarded along stale or invalid paths.

The main reasons for packet loss in ad hoc networks are link failure and node failure. After a failure at the physical layer, any further packets that attempt to use the link cause MAC-layer failures. Although the routing protocol removes such packets from the queue after a failure, new packets still keep arriving in the queue. If these new packets are forwarded onto the failed link, they block all other packets, resulting in network-wide low throughput and long delays [5, 7, 16, 35]. Detecting such failures in an ad hoc network is challenging because there is no centralized monitoring and management point. As a result, faulty nodes may remain in the network for a long time, which degrades routing performance. For instance, if a defective node participating in the routing process drops data packets, a large number of packets will be lost [6, 13, 30, 39].

Congestion is a state occurring in some part of a network when the message traffic is so heavy that it slows down the network response time. In wired networks, routing is handled by routers in the backbone network. The routers buffer incoming packets before sending them on their paths towards their destinations. Congestion may therefore occur when the buffers of the multi-port routers fill up, eventually causing packets to be dropped. This problem arises when a router receives traffic on multiple input ports at a higher rate than it can forward [4, 28, 40].

In wireless networks, nodes use the CSMA/CA channel access technique, and a wireless device acting as a router in a MANET can receive traffic from only one neighboring device at a time, which reduces the load on the routing process at the device. A device may receive multiple packets before it can forward one packet because the medium is busy, but this is not expected to create a buffering problem at the device [10].

Network congestion is a major problem in mobile wireless ad hoc networks. Owing to the limited availability of resources and the nature of the wireless medium, congestion is a common issue in MANETs. In such networks, because of the shared wireless channel and dynamic topology, packet transmissions suffer from interference and fading. When the bandwidth is exhausted, a large amount of real-time traffic is likely to build up and lead to congestion. Congestion causes packet loss, bandwidth degradation, and wasted time and energy. The congestion problem cannot be completely eliminated; however, its influence can be reduced by using suitable procedures and rules for traffic flow [23, 34].

TCP congestion control works very well on the Internet. However, congestion control mechanisms are greatly affected by some of the unique properties of MANETs, and the widely differing MANET environment is highly problematic for standard TCP. In the Internet, congestion is generally concentrated at a single router. In a MANET, on the other hand, congestion affects a whole area because of the shared medium [25].

Because of the comparatively low bandwidth of MANETs, no single sender can deliberately collapse the network through congestion; however, a single traffic flow can strongly affect the network condition, which may cause potentially severe unfairness between traffic flows. Compared with conventional wired networks such as the Internet, wireless multihop networks are prone to overload-related problems. Hence, an appropriate congestion control protocol is essential for network stability and acceptable performance [14, 15].

Explicit link failure notification (ELFN) is a cross-layer proposal in which TCP interacts with the routing protocol to detect route failures. ELFN messages are sent back by the routing protocol to the TCP sender from the node that detects the failure. ELFN messages carry the sender and receiver addresses and ports along with the TCP sequence number. Hence, TCP can distinguish losses caused by congestion from those caused by mobility. The fixed retransmission timeout protocol adds route error recovery to the routing algorithm: a route failure is assumed whenever two successive retransmission timeouts occur, and the TCP sender then retransmits at regular intervals. Ad hoc TCP (ATCP) relies on the ICMP protocol and on the explicit congestion notification (ECN) scheme to detect network partition and congestion. When three duplicate ACKs are detected, ATCP puts TCP in "persist mode" and retransmits the lost packet from its buffer; hence, TCP congestion control is not invoked when it is not really needed. When network congestion is detected through the receipt of an ECN message, ATCP simply forwards the packet to TCP so that TCP can invoke its normal congestion control mechanism [8, 32, 33].

2 Related work

Peng et al. [18] have presented a multi-rate multicast congestion control scheme for MANETs in order to achieve high fairness with TCP, robustness against misbehaving receivers, and traffic stability, without changing the queuing, scheduling, or forwarding policies of existing networks. This scheme only introduces very limited control traffic overhead by the on-the-spot information collection and rate control.

Karunakaran et al. [9] have presented a cluster-based congestion control protocol that supports congestion control in ad hoc networks with scalable and distributed cluster-based mechanisms. Their approach is based on the self-organization of the network into clusters. The clusters autonomously and proactively monitor congestion within their localized scope. Node rates are adjusted, and cooperation among cluster nodes is achieved, by exchanging a small number of control packets. The clustering process is also used to discover the interactions between flows. Their approach improves the responsiveness of the system compared with end-to-end techniques.

Rahman et al. [20] have proposed an explicit rate-based congestion control mechanism. In their approach, throughput can be increased rapidly by allowing the routers to give explicit feedback; the sender's flow is controlled using the explicit information carried in the feedback packets from the routers. Their protocol performs better than the conservative behavior of TCP and TCP-like protocols for multimedia streaming over MANETs.

Nishimura et al. [17] have discussed a routing protocol that uses multiple agents to control network congestion in a MANET. They employ two kinds of agents in routing: a routing agent (RA), which gathers information regarding network congestion and link failures, and a message agent (MA), which uses the information gathered by the RA to reach the destination nodes. MAs determine the forwarding direction for the data packets using an evaluation function.

Akyol et al. [1] have proposed a wireless greedy primal dual (wGPD) algorithm for combined congestion control and scheduling in MANETs. In wGPD, scheduling decisions are taken in two phases: intra-node scheduling and inter-node scheduling. In intra-node scheduling, each node decides which packet to transmit whenever it is next allowed to make a transmission. In inter-node scheduling, interfering transmissions compete among themselves to determine which node should transmit the next packet. Two types of congestion control are defined: an unreliable version and a reliable version. The first is suited to the standard UDP protocol and is used for loss-tolerant flows; the second is suited to the TCP protocol and ensures that all data are delivered to the destination.

From the above analysis of existing works, we can conclude that resolving congestion through an alternate or bypass route has rarely been addressed.

3 Problem identification and solution

In [27], we have developed an adaptive reliable routing protocol using combined link stability estimation for MANETs. The main objective of this protocol is to determine a QoS path that prolongs the network lifetime and reduces packet loss. A combined weight value is calculated for each path based on the link expiration time, node remaining energy, node velocity, and received signal strength in order to predict the link stability or lifetime. When a link is likely to break, the previous node caches the subsequent packets in its data buffer. When a link failure actually occurs, the upstream node with the cached data in its buffer can retransmit it over the next reliable link by using a bypass route.

Thus, when a link failure occurs, the upstream node with the cached data in its buffer can retransmit it over the next reliable link by using a bypass route together with a fault-tolerance technique. However, the technique for choosing the bypass route and the way to avoid congestion on the bypass route are not handled in [19].

In this paper, we propose a technique for selecting the bypass route so as to resolve congestion. When a source node detects congestion on a link along the path, it distributes the traffic over alternative paths. Congestion is detected based on the utilization and capacity of the links and paths. The traffic distribution takes a path availability threshold into account and uses a traffic splitting function. If a node cannot resolve the congestion, it signals its neighbors using a congestion indication bit, which is not used in existing congestion control schemes.

4 Routing protocol using combined link stability estimation

4.1 Measuring the signal strength

In a cross-layer design, the received signal strength is measured at the physical layer and accessed by the upper layers. The procedures at the physical layer have to be modified so that the measured received signal strength is passed to the MAC layer along with the signal [21]. This value is then stored in the routing/neighbour tables and used in the decision-making process. The received signal strength is passed to the upper layers as an interlayer interaction parameter and is used to improve network performance by adapting the medium access and routing protocols as required by the cross-layer design.

IEEE 802.11 is a reliable MAC protocol. Since the transmitted signal must reach every exposed node, a fixed maximum transmission power is assumed. When a sending node transmits a Request to Send (RTS) packet, it attaches its transmission power level. While receiving the RTS packet, the receiving node measures the received signal strength according to the free-space propagation model [21]:

$$ P_{R} = P_{T} \left( {\lambda /4\pi d} \right)^{2} G_{T} G_{R} $$
(1)

where λ is the carrier wavelength, d is the distance between sender and receiver, P_R and P_T are the received and transmitted power, respectively, and G_T and G_R are the unity gains of the transmitting and receiving omnidirectional antennas, respectively.
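As an illustration, the following minimal sketch evaluates Eq. (1) in Python; the function name, the carrier frequency, and the numerical values in the example are our own assumptions, not parameters taken from the protocol.

```python
import math

def received_power(p_tx, distance_m, wavelength_m, g_tx=1.0, g_rx=1.0):
    """Free-space received power, Eq. (1): P_R = P_T * (lambda / (4*pi*d))^2 * G_T * G_R."""
    return p_tx * (wavelength_m / (4.0 * math.pi * distance_m)) ** 2 * g_tx * g_rx

# Example: 2.4 GHz carrier (wavelength ~ 0.125 m), 100 m separation, 100 mW transmit power.
wavelength = 3.0e8 / 2.4e9
print(received_power(p_tx=0.1, distance_m=100.0, wavelength_m=wavelength))
```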

4.2 Route expiration time (RET)

The RET is the minimum value selected from the set of link expiration times (LETs) along a feasible path. The LET [19] is the period of link connectivity between two nodes. Thus, the minimum LET value is taken for each path, and the path with the maximum RET, which represents the more reliable routing path, is selected.

$$ RET_{i} = {\text{Min}}\left( {LETs_{i} } \right) $$
(2)

Thus, the RET is the minimum value among LETs of the feasible path.
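A small sketch of this selection rule, assuming each candidate path is simply described by the list of LET values (in seconds) of its links; the numbers are hypothetical.

```python
def route_expiration_time(lets):
    """Eq. (2): the RET of a path is the minimum LET among its links."""
    return min(lets)

def most_reliable_path(candidate_paths):
    """Select the path whose RET is the largest (the more reliable routing path)."""
    return max(candidate_paths, key=route_expiration_time)

# Example: two hypothetical candidate paths described by the LETs of their links.
print(most_reliable_path([[12.0, 7.5, 20.0], [9.0, 15.0, 11.0]]))  # second path wins (RET = 9.0 > 7.5)
```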

The principle of the LET is to estimate the future disconnection time of two neighbors in motion. This can be achieved as follows. A global positioning system (GPS) can determine the motion parameters of two neighboring nodes. The following assumptions are made: a free-space propagation model in which the signal strength depends solely on the distance to the transmitter, and all nodes have their clocks synchronized to the GPS clock. The duration for which the two nodes remain connected can then be calculated from their motion parameters, which are obtained from the GPS and include speed, direction, and radio range.

Given a prediction T_p of the time for which an active link between two nodes at time t_0 will remain continuously available, the availability of this link, L(T_p), is defined as

$$ L\left( {T_{p} } \right) = P\left\{ {\text{link stays valid until}}\;t_{0} + T_{p} \,|\,{\text{link available at}}\;t_{0} \right\} $$

indicating the probability that the link will be continuously available from time t_0 to t_0 + T_p.

The calculation of L(T_p) can be divided into two parts: the link availability when the velocities of the two nodes stay unchanged between t_0 and t_0 + T_p, denoted L_1(T_p), and the availability in all other cases, denoted L_2(T_p). That is,

$$ L\left( {T_{p} } \right) = L_{1} \left( {T_{p} } \right) + L_{2} \left( {T_{p} } \right) $$
(3)

It is easy to calculate L_1(T_p): it equals the probability that the epochs of the two nodes, measured from t_0 onwards, are both longer than T_p, because T_p is an accurate prediction if the movements of the two nodes remain unchanged. Since the nodes' movements are independent of each other and the exponential distribution is memoryless, L_1(T_p) is given by

$$ L_{1} \left( {T_{p} } \right) = \left[ {1 - E\left( {T_{p} } \right)} \right]^{2} = e^{{ - 2\lambda T_{p} }} $$
(4)

However, it is difficult to calculate L_2(T_p) accurately because of the difficulty of characterizing the changes in link status that are caused by changes in a node's movement.

4.3 Node’s remaining energy

It is assumed that all nodes are equipped with a residual power detection device and know their physical node position. The transmitting energy for a packet can be computed as

$$ Energy_{tx} = \frac{{P_{size} \times Power_{tx} }}{LBW} $$
(5)

where P_size is the data packet size, Power_tx is the packet transmitting power, and LBW is the wireless link bandwidth. When a mobile node performs power control during packet transmission, the transmitting energy for one packet as a function of the node distance is given as

$$ Energy_{tx} = kd^{\alpha } $$
(6)

where k is the proportionality constant, d is the distance between the two neighboring nodes, and α is a parameter that depends on the physical environment (generally between 2 and 4).

The shorter the distance between the transmitter and the receiver, the smaller the amount of energy required. The total energy required at each node is given by

$$ Energy_{tot} = p \times \left( {Energy_{tx} + Energy_{proc} } \right) $$
(7)

where p is the number of packets. The energy required for packet processing (Energy_proc) is much smaller than that required for packet transmission.

The node remaining energy, or residual energy, is the energy left after packet transmission; i.e., the residual energy Energy_res is given by

$$ Energy_{res} = Energy_{initial} - Energy_{tot} $$
(8)
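The energy bookkeeping of Eqs. (5), (7) and (8) can be sketched as follows; the packet size, link bandwidth, transmit power, and initial energy in the example are assumptions for illustration only.

```python
def residual_energy(e_initial, n_packets, p_size_bits, power_tx, link_bw_bps, e_proc=0.0):
    """Residual energy after transmitting n_packets, following Eqs. (5), (7) and (8)."""
    e_tx = (p_size_bits * power_tx) / link_bw_bps   # Eq. (5): per-packet transmit energy
    e_total = n_packets * (e_tx + e_proc)           # Eq. (7): total energy for n_packets
    return e_initial - e_total                      # Eq. (8): remaining (residual) energy

# Example: 512-byte packets over a 2 Mbps link at 100 mW transmit power, 1000 packets, 10 J initial energy.
print(residual_energy(e_initial=10.0, n_packets=1000,
                      p_size_bits=512 * 8, power_tx=0.1, link_bw_bps=2e6))
```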

4.4 Node velocity

Consider a node N1 which can always communicate with another node N2 until it reaches position p_3, which is at a distance R from N1, where R is the transmission range of each node.

We also define T_p = T_{p_1 p_3}, where T_{p_1 p_3} is the time taken by node N1 to move from p_1 to p_3.

Knowing T_p, the probability that the link remains available during the period T_p can be calculated. Assume that a random-walk-based mobility model defines the movement of each node: every node moves at a constant speed in a constant direction for a time duration referred to as a mobility epoch. The epoch length of every node is thus exponentially distributed with mean λ^{-1}. The direction is uniformly distributed over [0, 2π], and the speed is uniformly distributed in a known range. It is assumed that speed, direction, epoch length, and the mobility of different nodes are uncorrelated, and that links fail independently. Based on these assumptions, a conservative prediction of the link being available during the period T_p is given as in [26]:

$$ L_{2} \left( {T_{p} } \right) = \frac{{1 - e^{{ - 2\lambda T_{p} }} }}{{2\lambda T_{p} }} + \frac{{\lambda T_{p} e^{{ - 2\lambda T_{p} }} }}{2} $$
(9)

A metric is needed that reflects this aspect and whose value lies in the range [0, 1], so that the reliability of a link can be accounted for when selecting routes. The L(T_p) values of the links along a path cannot simply be combined to estimate the path availability, because each link has a different T_p.

The term T_p × L(T_p), which estimates the average available time of a link, is therefore used in place of L(T_p). Assuming that the estimation is carried out regularly with period T_r, we can restrict our interest to the estimation within T_r; the quantity of interest is then the ratio of T_p × L(T_p) to T_r. Based on this reasoning, a new routing metric called the normalized link availability (LA_N) is defined as follows [26]:

$$ LA_{N} = \hbox{min} \left[ {\frac{{T_{p} \times L_{2} \left( {T_{p} } \right)}}{{T_{r} }},1} \right] $$
(10)
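A minimal sketch of the link-availability terms of Eqs. (4), (9) and (10); the epoch rate, prediction time, and estimation period in the example are assumed values.

```python
import math

def l1(t_p, lam):
    """Eq. (4): availability when both nodes keep their velocities (epoch rate lam)."""
    return math.exp(-2.0 * lam * t_p)

def l2(t_p, lam):
    """Eq. (9): conservative availability estimate when node velocities may change."""
    x = lam * t_p
    return (1.0 - math.exp(-2.0 * x)) / (2.0 * x) + (x * math.exp(-2.0 * x)) / 2.0

def normalized_link_availability(t_p, lam, t_r):
    """Eq. (10): LA_N = min(T_p * L2(T_p) / T_r, 1)."""
    return min(t_p * l2(t_p, lam) / t_r, 1.0)

# Example: predicted link time of 8 s, mean epoch length 1/lam = 20 s, estimation period T_r = 10 s.
print(l1(8.0, 0.05), normalized_link_availability(8.0, 0.05, 10.0))
```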

4.5 Combined metric

Finally, the combined metric for finding the link lifetime is given by

$$ CRM = P_{R} + L_{1} \left( {T_{P} } \right) + Energy_{res} + LA_{N} $$
(11)

The node status is adaptively determined based on the value of CRM as given below.

  • Node status is green if CRM_min < CRM < CRM_max

  • Node status is yellow if CRM = CRM_min

  • Node status is red if CRM < CRM_min

where CRM_min and CRM_max are the minimum and maximum combined routing metric values, respectively.
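The following sketch shows how Eq. (11) and the status classification could be evaluated; it assumes the four inputs have already been normalized to comparable scales, and the handling of CRM ≥ CRM_max (not specified above) is our own choice.

```python
def combined_routing_metric(p_r, l1_tp, energy_res, la_n):
    """Eq. (11): CRM as the sum of the four link-stability parameters (assumed pre-normalized)."""
    return p_r + l1_tp + energy_res + la_n

def node_status(crm, crm_min, crm_max):
    """Classify node status against the configured CRM_min / CRM_max thresholds."""
    if crm < crm_min:
        return "red"
    if crm == crm_min:
        return "yellow"
    # CRM_min < CRM < CRM_max is green; values >= CRM_max are also treated as green here (assumption).
    return "green"

print(node_status(combined_routing_metric(0.4, 0.7, 0.9, 0.6), crm_min=1.0, crm_max=3.0))  # green
```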

Then, Bypass Route Discovery is performed as described in our previous work [27]. When a link is likely to be broken, the previous node will cache the subsequent packets in its data buffer, and then when a link failure occurs, the upstream node with the cached data in its buffer can retransmit it through the next reliable link by using a bypass route.

5 Proposed bypass route technique to resolve congestion

5.1 Multipath construction

Consider the network as a directed graph G = (V, E), where V denotes the set of nodes and E represents the set of links or edges. In our technique, we propose to find multiple paths in terms of shortest and disjoint paths.

5.1.1 Shortest paths

The shortest path discovery algorithm discovers exclusive shortest paths from source S to destination D. Let sP_{s,d} be the shortest path between nodes S and D, and let N_S be the neighbor set of node S. The alternative shortest paths can then be estimated using the formula

$$ sP_{sd}^{s - 1} = \left\{ S \right\} \oplus sP_{s - 1,d} $$
(12)

where s − 1 denotes a neighbor node of S and ⊕ stands for the concatenation of two finite sub-paths. The alternative path sP_{sd}^{s−1} starts at node S and reaches destination D through the neighbor node s − 1 and its shortest path sP_{s−1,d}. However, this technique of finding alternate paths through neighboring nodes may introduce routing loops.

To ensure that the constructed shortest routes are loop-free, the sub-path sP_{s−1,d} must not contain S as an intermediate node. Thus, any alternate shortest path must satisfy the condition given in Algorithm-1.

Algorithm-1

Assumption: S and D as the source and destination nodes, respectively

 sP sd be the set of shortest paths between nodes S and D

1. If (S ∉ nodes (sP s−1,d )) then

2. The alternate path \( sP_{sd}^{s - 1} \) will be loop free

3. Else

4. The alternate path \( sP_{sd}^{s - 1} \) will contain a routing loop

5. End if

The computed alternate paths between nodes S and D are kept in the set A_{s,d}. The paths included in the set A_{s,d} can be represented as

$$ {\text{A}}_{s,d} = \left\{ {sP_{sd}^{s - 1} :{\text{s}} - 1 \in {\text{N}}_{\text{s}} ,\;{\text{S}} \notin sP_{sd}^{s - 1} } \right\} $$
(13)

Thus, the set A s,d contains multiple alternate paths that connect S and D.
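A minimal sketch of the alternate-path construction (Eqs. (12)–(13) and Algorithm-1), using the networkx library as an assumed helper for shortest-path computation; the graph and node names in the example are hypothetical.

```python
import networkx as nx  # assumed helper library for shortest paths

def alternate_paths(graph, s, d):
    """Build the set A_{s,d} of loop-free alternate shortest paths (Eqs. (12)-(13), Algorithm-1)."""
    alt = []
    for nbr in graph.neighbors(s):
        try:
            sub = nx.shortest_path(graph, nbr, d)   # sP_{s-1,d}
        except nx.NetworkXNoPath:
            continue
        if s in sub:                                # Algorithm-1: the sub-path must not contain S,
            continue                                # otherwise the alternate path would loop
        alt.append([s] + sub)                       # Eq. (12): {S} concatenated with sP_{s-1,d}
    return alt

g = nx.Graph([("S", "A"), ("A", "D"), ("S", "B"), ("B", "C"), ("C", "D")])
print(alternate_paths(g, "S", "D"))  # e.g. [['S', 'A', 'D'], ['S', 'B', 'C', 'D']]
```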

5.1.2 Disjoint paths

To make multipath routing efficient and consistent, the paths must satisfy the disjointness criterion. Any two paths in A_{s,d} are disjoint if and only if they satisfy the following condition.

Algorithm-2

Assumption: Let S and D denote the source and destination nodes, respectively

 Let \( sP_{sd}^{s - 1} \) and \( sP_{sd}^{s - 2} \) be two paths in the alternate path set A s,d

1. Find common nodes in \( sP_{sd}^{s - 1} \) and \( sP_{sd}^{s - 2} \)

2. If the only common nodes are S and D then

3. The paths will be disjoint paths

4. Else if they have common nodes other than S and D then

5. The paths will be interconnected paths

6. End if

Thus, disjoint paths between S and D can be given as,

$$ D_{s,d} = \left\{ {sP_{s,d} } \right\} \cup A_{s,d} $$
(14)
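A short sketch of the disjointness test of Algorithm-2; the paths in the example reuse the hypothetical node names from the previous sketch.

```python
def are_disjoint(p1, p2, s, d):
    """Algorithm-2: two paths are disjoint iff their only common nodes are S and D."""
    return (set(p1) & set(p2)) <= {s, d}

print(are_disjoint(["S", "A", "D"], ["S", "B", "C", "D"], "S", "D"))  # True  (disjoint)
print(are_disjoint(["S", "A", "D"], ["S", "A", "C", "D"], "S", "D"))  # False (share node A)
```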

5.2 Conjoint-c paths

In this approach, multiple paths are generated by defining a set cP_{s,d} that includes paths which may share conjoint (common) links. Let P_1 and P_2 be two paths; they can be added to the set cP_{s,d} if they satisfy the following condition:

$$ \left| {l\left( {P_{1} } \right) \cap l\left( {P_{2} } \right)} \right| \le c $$
(15)

where l(·) denotes the set of links of a path and c is the allowed number of common links. The set cP_{s,d} can be defined at various levels:

  • At conjoint level-0, the set cP_{s,d} contains only disjoint paths that connect S and D

  • At conjoint level-1, cP_{s,d} contains paths that share at most one common link

  • At conjoint level-c, cP_{s,d} contains paths that share at most c common links

Finally, the conjoint property can be defined as,

$$ cP_{s,d}^{c = 0} \subseteq cP_{s,d}^{c = 1} \subseteq \ldots \subseteq cP_{s,d}^{c = c} \subseteq cP_{s,d} $$
(16)

The algorithm for generating the set cP_{s,d} is given below as Algorithm-3.

Algorithm-3

1. Initialize cP_{s,d} to sP_{s,d}

2. Arrange the paths in A_{s,d} in ascending order of their path length

3. Select further paths for the set cP_{s,d} from the set A_{s,d} as follows

4. While (A_{s,d} ≠ ∅) do

5. Find the shortest path sp in A_{s,d}

6. Remove sp from A_{s,d}

7. Check path sp against all paths in cP_{s,d}

8. If sp satisfies the conjoint condition (Eq. 15) for all paths in cP_{s,d} then

9. Add sp to the set cP_{s,d}

10. End if

11. End while
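The following is a minimal sketch of Algorithm-3 under the "at most c common links" reading of Eq. (15); paths are plain node lists as in the previous sketches, and the example inputs are hypothetical.

```python
def path_links(path):
    """Set of (undirected) links traversed by a path."""
    return {frozenset(edge) for edge in zip(path, path[1:])}

def conjoint_paths(sp_sd, a_sd, c):
    """Algorithm-3: build cP_{s,d} by admitting paths from A_{s,d} that share at most c links
    (Eq. (15)) with every path already in cP_{s,d}."""
    cp = [sp_sd]                                  # step 1: start from the shortest path sP_{s,d}
    for sp in sorted(a_sd, key=len):              # steps 2-6: candidates in ascending path length
        links = path_links(sp)
        if all(len(links & path_links(p)) <= c for p in cp):   # steps 7-8: conjoint condition
            cp.append(sp)                         # step 9: admit the path into cP_{s,d}
    return cp

# Conjoint level-0 keeps only disjoint paths; raising c admits paths with more shared links.
print(conjoint_paths(["S", "A", "D"], [["S", "B", "C", "D"], ["S", "A", "C", "D"]], c=0))
```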

5.3 Load balancing

By using Algorithm-1, we develop a method to resolve network congestion by distributing the traffic over a group G of multipath routes. When node N detects congestion on a local outgoing link L_n, it calculates the multipath routes to every destination D whose path p_{N,D} contains link L_n. Part of the traffic to node D is then shifted to alternative paths p ∈ P^G.

Assume that every node (say node N) in the network measures the utilization of its local outgoing links L_n. The link utilization is denoted L_Ut and the path utilization is denoted P_Ut. The utilization of a path p_1 is computed as [24]

$$ P_{Ut} \left( {p_{1} } \right) = \hbox{min} \left\{ {L_{Ut} \left( {L_{n} } \right):L_{n} \in P^{G} } \right\} $$
(17)

Both path and link utilizations are calculated by each node using the network or routing information forwarded by its neighboring nodes:

$$ P_{Ut} \left( {p_{s,d} } \right) = \hbox{min} \left\{ {L_{Ut} \left( {L_{s,i} } \right),P_{Ut} \left( {p_{j,d} } \right)} \right\} $$
(18)

Congestion is detected on a local link if its utilization exceeds a local congestion threshold T_H, where T_H is a predefined threshold value. The main goal is to reduce the utilization to an acceptable level by shifting a portion of the traffic onto the alternative paths; this portion of the traffic is referred to as bypass traffic.
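A small sketch of the utilization-based congestion test, following Eq. (17) as stated above (path utilization as the minimum link utilization) and the threshold T_H; the utilization values and the threshold of 0.8 are arbitrary.

```python
def path_utilization(link_utils):
    """Eq. (17): path utilization taken as the minimum utilization over the path's links."""
    return min(link_utils)

def is_congested(link_util, t_h):
    """Local congestion is declared when a link's utilization exceeds the threshold T_H."""
    return link_util > t_h

# Example: link utilizations along a path, with a hypothetical threshold T_H = 0.8.
print(path_utilization([0.4, 0.9, 0.6]), is_congested(0.9, t_h=0.8))
```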

A node calculates a set of alternative paths and distributes the bypass traffic over these paths whenever it detects local link congestion or receives an Absolute Congestion Index (ACI) bit from a neighbor. A node signals its neighbors using the ACI bit when it is unable to resolve the congestion by the former method. The procedure for congestion-triggered multipath traffic distribution to resolve congestion at the local link is as follows.

  1. When node N detects local congestion on an outgoing link L_n (i.e., L_Ut(L_n) > T_H), or receives an ACI = 1 bit from a neighbor node on that link, it tries to move bypass traffic onto alternative paths that avoid this link. The alternative paths are the group-G paths determined using Algorithm-1.

  2. Let SP = {p ∈ P^G : P_Ut(p) < μ}, where μ is called the path availability threshold.

  3. Distribute the traffic over the path set SP.

For its active path P_a, node N load-balances its current traffic over all the available paths. For example, let the current load of node N be 100 kbit/s and let P1, P2, P3, and P4 be the available paths in SP. On reception of an ACI bit, node N will use the multiple paths P1, P2, P3, and P4 in addition to its primary path P_a, sending on each of these paths an equal share of the data rate, i.e., 100/4 = 25 kbit/s.
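The sketch below reproduces the three-step distribution procedure and the numerical example above (100 kbit/s over four available paths); the path names and utilization values are hypothetical.

```python
def distribute_bypass_traffic(load_kbps, path_utils, mu):
    """Keep only alternative paths whose utilization is below the availability threshold mu
    (SP = {p in P^G : P_Ut(p) < mu}) and split the bypass load equally over them."""
    sp = [p for p, util in path_utils.items() if util < mu]
    if not sp:
        return {}                      # no usable alternative path
    share = load_kbps / len(sp)
    return {p: share for p in sp}

# Example from the text: a 100 kbit/s load and four available paths -> 25 kbit/s on each.
print(distribute_bypass_traffic(100.0, {"P1": 0.3, "P2": 0.5, "P3": 0.2, "P4": 0.6}, mu=0.7))
```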

6 Simulation results

6.1 Simulation model and parameters

We use NS2 [31] to simulate our proposed technique. In our simulation, the channel capacity of mobile hosts is set to the same value: 2 Mbps. We use the distributed coordination function (DCF) of IEEE 802.11 for wireless LANs as the MAC layer protocol. It has the functionality to notify the network layer about link breakage.

In our simulation, the number of mobile nodes is 150. The mobile nodes move in a 1000 m × 1000 m region for a simulation time of 50 s. We assume that each node moves independently with the same average speed. All nodes have the same transmission range of 250 m. In our simulation, the speed is 10 m/s. The simulated traffic is constant bit rate (CBR), and the number of CBR flows is varied as 2, 4, 6, and 8.

Our simulation settings and parameters are summarized in Table 1.

Table 1 Simulation parameters

6.2 Performance metrics

We evaluate mainly the performance according to the following metrics.

  • Average end-to-end delay The end-to-end delay is averaged over all surviving data packets from the sources to the destinations.

  • Average packet delivery ratio It is the ratio of the number of packets received successfully to the total number of packets transmitted.

  • Drop It is the average number of packets dropped.

  • Throughput Throughput is defined as the total number of data packets received successfully during the transmission.

  • Overhead It is the number of control packets exchanged during the entire transmission of data packets.
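As an illustration of how such metrics could be computed from simulation trace counters, a small sketch follows; the counter names and values are hypothetical and not taken from our results.

```python
def packet_delivery_ratio(received, sent):
    """Ratio of successfully received data packets to transmitted data packets."""
    return received / sent if sent else 0.0

def average_end_to_end_delay(delays_s):
    """Average end-to-end delay (seconds) over all delivered data packets."""
    return sum(delays_s) / len(delays_s) if delays_s else 0.0

# Hypothetical counters extracted from a trace file.
print(packet_delivery_ratio(received=940, sent=1000),
      average_end_to_end_delay([0.021, 0.048, 0.033]))
```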

The simulation results are presented in the next section. For comparison, the Hop-by-Hop [36] technique is implemented in NS-2 using the same simulation settings described above. Then, we compare the proposed adaptive reliable congestion control routing protocol (ARCCRP) with Hop-by-Hop technique and the standard TORA protocol.

6.3 Results

6.3.1 Based on nodes

In this experiment, we vary the network size by increasing the number of nodes as 50, 75, 100, 125, and 150, keeping the number of flows at 8 and the data rate at 250 kb/s.

From Fig. 1 we can see that the delay increases linearly as the number of nodes increases. TORA has the highest delay, followed by Hop-by-Hop, while ARCCRP has significantly less delay than both Hop-by-Hop and TORA.

Fig. 1 Delay versus nodes

When the number of nodes increases, more packets are dropped and hence the packet delivery ratio decreases slightly. From Fig. 2, we can see that ARCCRP has the highest delivery ratio, followed by Hop-by-Hop, while TORA has the lowest delivery ratio. Figure 3 shows that the packet drop is lowest for ARCCRP, followed by Hop-by-Hop, whereas TORA has the highest packet drop.

Fig. 2 Delivery ratio versus nodes

Fig. 3 Packet drop versus nodes

When the number of nodes is increased, more control packets are exchanged, resulting in increased overhead. Nevertheless, from Fig. 4 we can see that ARCCRP has slightly less overhead than the existing Hop-by-Hop protocol.

Fig. 4 Overhead versus nodes

6.3.2 Based on flows

In this experiment, the number of CBR connections (traffic flows) is varied from 2 to 8 for a 150-node network in order to increase the amount of traffic in the network. The traffic rate is kept at 250 kb/s.

When we increase the number of flows, it results in congestion. Hence, the delay and packet drop will increase, whereas the throughput and packet delivery ratio will decrease.

From Fig. 5, we can see that when we increase the number of flows, ARCCRP has less delay than Hop-by-Hop and TORA protocols.

Fig. 5 Delay versus flows

From Fig. 6, we can see that when we increase the number of flows, ARCCRP has a higher delivery ratio than the Hop-by-Hop and TORA protocols. Figure 7 shows that ARCCRP has a lower packet drop than Hop-by-Hop and TORA. In Fig. 8, we can see that ARCCRP has the highest throughput, followed by Hop-by-Hop and TORA.

Fig. 6 Delivery ratio versus flows

Fig. 7 Packet drop versus flows

Fig. 8 Throughput versus flows

6.3.3 Based on rate

In this third experiment, the CBR traffic rate is varied from 100 to 250 kb/s to increase the amount of traffic in the network, with the number of flows fixed at 8.

When we increase the transmission rate, it results in congestion. Hence, the delay and packet drop will increase, whereas the throughput and packet delivery ratio will decrease.

From Fig. 9 we can see that when we increase the rate, ARCCRP has less delay than Hop-by-Hop and TORA protocols.

Fig. 9 Delay versus rate

From Fig. 10 we can see that when we increase the rate, ARCCRP has a higher delivery ratio than the Hop-by-Hop and TORA protocols. Figure 11 shows that ARCCRP has a lower packet drop than Hop-by-Hop and TORA. In Fig. 12, we can see that ARCCRP has the highest throughput, followed by Hop-by-Hop and TORA.

Fig. 10 Delivery ratio versus rate

Fig. 11 Packet drop versus rate

Fig. 12 Throughput versus rate

When the rate is increased, more control packets are exchanged, resulting in increased overhead. From Fig. 13, we can see that ARCCRP has slightly less overhead than the existing Hop-by-Hop protocol.

Fig. 13 Overhead versus rate

6.3.4 Based on pause time

In this experiment, we vary the pause time as 5, 10, 15, 20, and 25 s, with the number of nodes set to 50, the rate to 250 kb/s, and the number of flows to 8.

When the pause time increases, link disconnections become less frequent, thereby reducing route failures. Consequently, the throughput and packet delivery ratio increase and packet drops are reduced. The delay also decreases because fewer retransmissions are needed.

From Fig. 14, we can see that ARCCRP has less delay than the Hop-by-Hop and TORA protocols. From Fig. 15, we can see that when we increase the pause time, ARCCRP has a higher delivery ratio than Hop-by-Hop and TORA. Figure 16 shows that ARCCRP has a lower packet drop than Hop-by-Hop and TORA. In Fig. 17, we can see that ARCCRP has the highest throughput, followed by Hop-by-Hop and TORA.

Fig. 14 Delay versus pause time

Fig. 15 Delivery ratio versus pause time

Fig. 16 Packet drop versus pause time

Fig. 17 Throughput versus pause time

7 Conclusion

In this paper, we have proposed an enhanced technique to resolve congestion using bypass route selection in MANETs. When a node detects congestion on a local outgoing link L, it calculates the multipath routes to the destinations whose paths contain link L, and a portion of the traffic to those destinations is shifted onto alternative paths. Congestion is detected on a local link if its utilization exceeds a local congestion threshold T_H. The objective is to reduce the utilization to a more acceptable level by shifting a portion of the traffic onto the alternative paths; this portion of the traffic is called bypass traffic. A node calculates a set of alternative paths and distributes the bypass traffic over these paths whenever it detects local link congestion or receives a congestion indication bit from a neighbor; a node signals its neighbors using this bit when it cannot resolve the congestion itself. The simulation results show that the proposed technique is reliable and achieves higher throughput with reduced packet drop. As future work, we will compare the proposed technique with more existing works and present a more detailed analysis.