
1 Introduction

According to an Ericsson report [12], the number of LTE subscriptions was expected to grow to 3.7 billion by 2020. In February 2015 Cisco Systems published, in turn, a forecast for the period 2014–2019 [7], according to which global mobile data traffic will grow three times faster than global fixed IP traffic. Mobile data traffic will grow 10-fold from 2014 to 2019, a compound annual growth rate of 57%, reaching an annual run rate of 291.8 exabytes by 2019, up from 30.3 exabytes in 2014. In the middle of 2016 the European Commission [13] presented the coordinated designation and authorization of the 700 MHz band for wireless broadband by 2020 and the coordinated designation of the sub-700 MHz band for flexible use, which safeguards the provision of audiovisual media services to mass audiences, as well as investments in more efficient technologies needed to vacate the current use of the 700 MHz band by digital terrestrial television. These estimates call for a search for optimal 4G solutions in terms of the QoS offered by telecom providers.

The IMT-Advanced requirements [6] of 1 Gbit/s for fixed and 100 Mbit/s for mobile users present two challenges for providers of wireless services:

  • Optimization of the dynamic selection of the best interface of multi-interface devices according to user requirements and device constraints such as power consumption, user fees, and application-specific QoS requirements (delay, latency and throughput);

  • Scalability of the work of billions of devices on the wireless network.

Until 2016, providers of 4G services facing these requirements chose between two advanced wireless technologies, LTE [9] and WiMAX [11], but since 2016 the widely used 4G technology is LTE, and many vendors implement only this technology in their end devices. Hybrid schemes supporting WiMAX/LTE appeared in 2010 from companies such as Huawei, Vodafone, KDDI, UQ Communication and others, but nowadays companies such as Vodafone, Verizon, China Mobile, AT&T, Nokia and Ericsson bet on LTE in their equipment.

Solutions are needed to improve QoS in terms of delay under higher loads and of packet loss. For communications to be successful, it is also essential to focus on the prioritization of network traffic for different types of communication streams.

2 Essence of Traffic Management by Priority in LTE Networks

Even as LTE technology matures, QoS for the uplink is discussed by many authors [3,4,5, 8, 10]. After the first implementations of LTE, the focus in the allocation of resources has shifted towards profit maximization and user satisfaction [2].

In 3GPP, the QoS Class Identifier (QCI) consists of basic classes, defined as “default”, “expedited forwarding”, and “assured forwarding”: expedited forwarding is used for ‘strict’ priority (video and voice), and assured forwarding is used for business differentiation (e.g., weighted-fair priority).

In an LTE network, QoS is provided between the end-user device and the Packet Data Network (PDN) Gateway by means of ‘bearers’. A bearer is a set of network configurations that provides special handling of a traffic set according to its prioritization. Their hierarchy is presented in Table 1. The default bearer is established when the user equipment (UE) connects to the LTE network, while a dedicated bearer is established whenever QoS must be set for a specific traffic type (service) such as VoIP, video, etc.

Table 1. Hierarchy of LTE QoS

GBR (Guaranteed Bit Rate) bearers provide guaranteed bandwidth and monitor two parameters in the uplink and downlink directions:

  • GBR - the minimum guaranteed bit rate for the EPS (Evolved Packet System) bearer,

  • MBR - the maximum bit rate for the EPS bearer.

Non-GBR bearers do not provide guaranteed bandwidth and also monitor two parameters in the uplink and downlink directions: APN-AMBR, the overall maximum rate permitted for the entire non-GBR traffic of a specific APN (Access Point Name), and UE-AMBR, the overall maximum rate permitted for the entire non-GBR traffic across all APNs of a particular UE.

ARP (Allocation and Retention Priority) is used to decide whether the current allocation of resources should be modified to admit a new bearer or kept unchanged.

A TFT (Traffic Flow Template) is associated with a dedicated bearer, while a default bearer may or may not have a TFT. The TFT defines rules based on the source or destination address or the protocol used, so that the UE and the network know which IP packets to send over the individual dedicated bearer.

L-EBI (Linked EPS Bearer ID). Each dedicated bearer is always linked to one of the default bearers, and the L-EBI tells the dedicated bearer which default bearer it is linked to.
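To make the relationship between these parameters concrete, the sketch below models a bearer and its QoS profile as simple Python data structures. It is purely illustrative: the field names and types are assumptions chosen to mirror the terms above (QCI, ARP, GBR/MBR, TFT, L-EBI) and are not part of the proposed simulator.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BearerQoS:
    # Illustrative per-bearer QoS profile; field names are assumptions.
    qci: int                        # QoS Class Identifier (see Table 2)
    arp: int                        # Allocation and Retention Priority (lower = higher priority)
    gbr: Optional[float] = None     # guaranteed bit rate, GBR bearers only (per direction)
    mbr: Optional[float] = None     # maximum bit rate, GBR bearers only (per direction)

@dataclass
class Bearer:
    ebi: int                        # EPS bearer identity
    qos: BearerQoS
    tft: Optional[list] = None      # packet filters; dedicated bearers always carry a TFT
    l_ebi: Optional[int] = None     # L-EBI: the default bearer this dedicated bearer is linked to

    @property
    def is_dedicated(self) -> bool:
        return self.l_ebi is not None
```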

In LTE networks, QoS differentiation is based on classes, as in WiMAX; here they are called QoS Class Identifiers (QCI). They define the basic characteristics at the IP packet level, as presented in Table 2.

Table 2. QCI classes in LTE

A preemption algorithm is then applied in the cell, which allows bearers requesting admission with high priority to displace connected low-priority bearers in order to reduce the cell load. This algorithm, coupled with priority-based admission control, can achieve low dropping and blocking probabilities.
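As an illustration of this idea, the following minimal sketch (in Python, with illustrative names) admits a requesting bearer by preempting connected bearers of lower priority until the cell load fits the capacity again; the actual 3GPP preemption procedure is more involved.

```python
def admit_with_preemption(active, new_arp, new_load, capacity):
    """Sketch of priority-based admission with preemption.

    `active` is a list of (arp, load) pairs for connected bearers; a lower
    ARP value means higher priority.  Returns (updated_bearers, admitted).
    """
    total = sum(load for _, load in active)
    if total + new_load <= capacity:
        return active + [(new_arp, new_load)], True

    # Preempt connected bearers with lower priority than the requester,
    # starting from the lowest-priority one, until the request fits.
    survivors = sorted(active, key=lambda bearer: bearer[0])  # highest priority first
    while survivors and survivors[-1][0] > new_arp:
        _, freed = survivors.pop()                            # drop the lowest-priority bearer
        total -= freed
        if total + new_load <= capacity:
            return survivors + [(new_arp, new_load)], True
    return active, False                                      # reject: cannot free enough load


# Example: a cell at full load admits a priority-2 bearer by dropping a priority-9 one.
bearers = [(1, 40.0), (5, 30.0), (9, 30.0)]
bearers, admitted = admit_with_preemption(bearers, new_arp=2, new_load=25.0, capacity=100.0)
```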

3 Proposed Algorithm for Prioritization of UEs in the LTE

The functions for QoS management in access networks are responsible for the efficient allocation of resources on the wireless interface. They are generally defined as Radio Resource Management algorithms and incorporate Power Control, Handover Control, Admission Control, Load Control and Packet Scheduling (PS); the last three are directly related to QoS at the cell level. They are used to ensure maximum throughput for individual services. The aim is to keep the network throughput as high as possible at the small price of only slightly more handovers.

LTE uses the multiple access technology OFDMA, and the total bandwidth is divided into Resource Blocks (RBs) in the frequency domain. Data is transmitted in Transport Blocks (TBs) within one Transmission Time Interval (TTI) of 1 ms. Each RB consists of 12 subcarriers (each of 15 kHz). The frame is 10 ms long and is divided into 10 equal subframes; each subframe contains 2 slots of 0.5 ms. Each RB corresponds to one slot in time. One TB corresponds to one subframe and is the minimum unit to schedule. The serving rule is to find the first space that can fit the TB. If there are not enough RBs in the current TTI, the scheduler tries to find resources in the next TTI. This strategy minimizes the response latency, which is best for delay-sensitive traffic.
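The first-fit rule described above can be sketched as follows (Python, illustrative only): the scheduler keeps a count of free RBs per TTI and places a transport block in the first TTI of the scheduling window that can still hold it.

```python
def schedule_tb(rb_free, tb_rbs, start_tti=0):
    """Minimal sketch of the first-fit rule.

    `rb_free` is a list indexed by TTI; rb_free[t] is the number of free
    resource blocks in TTI t.  `tb_rbs` is the number of RBs the transport
    block needs.  The TB is placed in the first TTI, starting from
    `start_tti`, that still has enough free RBs; otherwise later TTIs are
    tried, which keeps the response latency low for delay-sensitive traffic.
    Returns the chosen TTI, or None if nothing fits in the window.
    """
    for tti in range(start_tti, len(rb_free)):
        if rb_free[tti] >= tb_rbs:
            rb_free[tti] -= tb_rbs     # reserve the RBs in that TTI
            return tti
    return None


# Example: a 6-RB transport block skips the congested TTI 0 and lands in TTI 1.
free = [3, 50, 50, 50]                 # free RBs per TTI in the scheduling window
print(schedule_tb(free, tb_rbs=6))     # -> 1
```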

In wireless radio networks, the base station should admit as many users as possible to increase revenue. On the other hand, the quality of service should be guaranteed in order to provide satisfactory service. The maximum number of users a base station can support is bounded by the system bandwidth. Under the QoS constraint, once the maximum bandwidth is reached, new connection requests should be rejected.

Let the capacity of the cell be C. Then the load L of the cell at time slot t is

$$ L = \sum\nolimits_{i = 1}^{n} {L_{i} \left( t \right)} $$
(1)

where

$$ L_{i} \left( t \right) = \frac{{b_{i}^{u} }}{{b_{i} \left( t \right)}} $$
(2)

and

$$ L \le C $$
(3)

If bearer j wants to use the same cell in the same time slot t, this will be possible if

$$ L + L_{j} \left( t \right) \le C $$
(4)

If this condition is not satisfied, the resources in the current TTI are not enough, and the scheduler must reorder resources within the whole TTI window and allocate resources in the reserved bandwidth. In this case the task is to find spaces big enough for the new request. If there are not enough resources in the current TTI, the scheduler tries to find resources in the next TTI. This strategy minimizes the response latency and is thus useful for delay-sensitive traffic.
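A compact reading of conditions (1)–(4) is sketched below in Python. It assumes that $b_i^u$ is the bandwidth bearer $i$ requests and $b_i(t)$ the bandwidth the cell can currently give it, so that each bearer contributes the load $L_i(t) = b_i^u / b_i(t)$; this interpretation of the symbols is an assumption.

```python
def cell_load(bearers):
    """Total load per Eqs. (1)-(2); `bearers` holds (b_u, b_t) pairs, where b_u is
    read as the requested bandwidth and b_t as the bandwidth currently available
    to that bearer (an assumed interpretation of b_i^u and b_i(t))."""
    return sum(b_u / b_t for b_u, b_t in bearers)


def can_admit(bearers, new_bearer, capacity):
    """Condition (4): bearer j fits only if the extra load still stays within C."""
    b_u, b_t = new_bearer
    return cell_load(bearers) + b_u / b_t <= capacity


# Example: a cell with load 0.8 out of C = 1.0 can still take a bearer adding 0.15.
print(can_admit([(2.0, 5.0), (4.0, 10.0)], (1.5, 10.0), capacity=1.0))  # -> True
```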

This reallocation procedure is not applicable to beacon transmissions (sent among devices every 100 ms) because of the emergency information they convey; the reserved resource blocks therefore exist to accommodate such temporary overload.

This means that the important factors for QoS are: the Modulation and Coding Scheme (MCS) in use, the MAC Transport Block (TB) size, the allocation bitmap identifying which RBs will carry the data transmitted by the eNB to each user, the number of users, and the prioritization of users.

The drop ratio is defined as the ratio of the number of rejected beacons to the number of accepted beacons. In the non-prioritization scheme, beacons are rejected due to cell overload; newly arriving beacons can only be accepted after some users move out of the service region, reducing the load. In the prioritization scheme, the rejected beacons are the ones removed by the congestion control algorithm.

The present paper offers an algorithm for serving UEs in the distribution of resources in the uplink of an LTE network, composed of two modules: an admission control mechanism and a Scheduler. According to the network load, the admission control manages the number of UEs that can enter the Scheduler, in order to avoid overloading the system with too many UEs. The Scheduler allocates RBs among UEs according to their needs. In order to engage the control mechanism of the base station (eNodeB), the UE has to be connected with it. For this purpose, when a UE is in the range of several base stations and wants to start a data transfer, it follows the next steps:

  1. The UE initiates a search for the nearest base station.

  2. The UE decodes the system information that the base station sends to it (this is the base station to which the UE will try to connect).

  3. In case of successful synchronization with the base station, the UE starts the authentication process.

  4. If the authentication is successful, the UE sends a request to connect to the eNodeB:

     (a) If the UE did not get an identifier from the eNodeB, it receives no allocated resources for the transfer of user data and can only listen to the service channel. The UE also falls into this state when it has made a connection to the eNodeB and has received allocated resources (it has sent/received RBs), but has been inactive for data transmission for some time.

     (b) If the UE gets an identifier from the eNodeB, the base station calculates which resources (number of RBs) it can allocate to it, based on prioritization. After that the UE synchronizes with the eNodeB for sending/receiving data. This decision is made by the second module, the Scheduler.

     (c) If the UE received an identifier from the eNodeB but lost synchronization with it due to temporary inactivity, it can only receive data, not send (the uplink connection is inactive).

After successful authentication of the UE with the base station, the proposed algorithm starts; its first module, the admission control, generally operates in the following steps:

  1. The total number of UEs (BR) that will be serviced by the Scheduler at the current moment t is determined: if at the previous time (t−1) all UEs were satisfied, (5) applies, otherwise (6).

$$ {\text{BR}}_{\text{t}} = {\text{BR}}_{{{\text{t}} - 1}} + 1 $$
(5)
$$ {\text{BR}}_{\text{t}} = {\text{BR}}_{{{\text{t}} - 1}} - 1 $$
(6)
  2. It is determined whether a new UE may enter the Scheduler for treatment: if the maximum number of UEs to serve in one timeslot has been reached, the request from this UE is rejected; otherwise the UE is included in the system and the Scheduler starts to serve it.

Resource allocation in the Scheduler is based on priority, as presented in Fig. 1 (right); a compact sketch of both modules is given below.
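The following minimal Python sketch illustrates the two modules with illustrative names. Equations (5) and (6) drive the size of the admission window, and the admitted UEs are then ordered by the Scheduler's priority criterion; here the distance between the UE and the base station is used as that criterion, as in the prioritization scheme compared in Sect. 5, which is an assumption for this sketch.

```python
def admission_window(br_prev, all_satisfied):
    """Eqs. (5)/(6): grow the number of served UEs by one if every UE was
    satisfied in the previous TTI, otherwise shrink it by one."""
    return br_prev + 1 if all_satisfied else max(br_prev - 1, 0)


def admit_and_order(waiting_ues, br_t, max_ues_per_tti):
    """Admission control plus priority ordering (illustrative).

    `waiting_ues` is a list of dicts with an 'id' and a 'distance' to the
    eNodeB; UEs beyond the per-TTI limit are rejected, and the admitted
    ones are served closest-first (the assumed priority key).
    """
    limit = min(br_t, max_ues_per_tti)
    admitted = waiting_ues[:limit]
    rejected = waiting_ues[limit:]
    admitted.sort(key=lambda ue: ue['distance'])  # Scheduler serves by priority
    return admitted, rejected


# Example: the window grows after a fully satisfied TTI, then 3 of 4 UEs are served.
br = admission_window(br_prev=2, all_satisfied=True)               # -> 3
ues = [{'id': 1, 'distance': 250}, {'id': 2, 'distance': 80},
       {'id': 3, 'distance': 400}, {'id': 4, 'distance': 120}]
served, dropped = admit_and_order(ues, br, max_ues_per_tti=10)     # the 4th waiting UE is rejected
```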

Fig. 1. The architecture of the LTE simulator and traffic prioritization in the scheduler

4 Designed Simulation Framework of LTE Scheduler

For this approach, a simulation environment was built to implement and explore the proposed algorithm. The software tool used is Visual Basic 2010. The architecture of the simulator is presented in Fig. 1 (left). The modules “Topology Maintenance” and “Topology Modification” are realized with the classes “Form1” and “Form2”. These classes contain methods for adding the parameters of the eNodeB and the related UEs. A database for storing the data from the individual experiments for each eNodeB and its connected UEs is created. The database tables with the parameters of the eNodeB and the related UEs are presented in Fig. 2 (left). The module “Traffic Management” is realized with the class “Form3”. It loads data into the ‘UserEquipment’ table and visualizes the timing diagram chart.

Fig. 2. Database of LTE prioritization in the eNodeB (left) and resource allocation and transmission matrix (right)

The module “Resource Allocation Generator”, based on the class “eNodeBdata”, realizes the proposed prioritization algorithm. The class contains methods for sorting the UEs, adding their data to an array and arranging them. Figure 2 (right) presents an example of the resource allocation and transmission matrix. The data from the different experiments are exported in .xls format for the subsequent estimation.

5 Experimental Estimation of the Algorithm and Results

In this approach, RapidMiner 6.0 is used to create a model of the influence of multiple factors on QoS parameters of the LTE network, such as real throughput and drop ratio. The test data are obtained according to the values of TB size reported in [1], considering an equal distribution of the physical resource blocks among the users using Resource Allocation Type 0 as defined in Sect. 7.1.6.1 of [1]. The estimated data are obtained from the presented simulation framework. The acceptable relative tolerance (standard deviation) is

$$ \sigma = 0.05 $$
(7)

This tolerance is needed to take into account the transient behavior at the beginning of the simulation. The main part of the DataSet is presented in Table 3.

Table 3. Data for model of QoS in LTE network

The input DataSet is partitioned into 10 subsets of equal size. Of the 10 subsets, a single subset is retained as the testing DataSet, and the remaining 9 subsets are used as the training data set. The cross-validation process is repeated 10 times, with each of the 10 subsets used exactly once as the testing data. The results are then averaged to produce a single estimate. Learning processes usually optimize the model to fit the training data as well as possible. The Cross-Validation operator predicts the fit of a model to testing data, which is especially useful when separate testing data are not available. The model is presented in Fig. 3.
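For readers without RapidMiner, the same 10-fold cross-validation can be sketched in Python with pandas and scikit-learn; the file name, column names and the linear learner below are placeholders, not the actual model of Fig. 3.

```python
# Equivalent 10-fold cross-validation in Python/scikit-learn (the paper uses
# RapidMiner 6.0); file and column names are hypothetical placeholders for
# the DataSet of Table 3.
import pandas as pd
from sklearn.model_selection import cross_val_score, KFold
from sklearn.linear_model import LinearRegression

data = pd.read_excel("experiments.xls")                 # exported by the simulator
X = data[["MCS", "TB_size", "num_users", "priority"]]   # assumed feature columns
y = data["throughput_mbps"]                             # assumed target column

model = LinearRegression()                              # stand-in learner
cv = KFold(n_splits=10, shuffle=True, random_state=1)   # 10 equal-size subsets
scores = cross_val_score(model, X, y, cv=cv)            # one fit estimate per fold
print(scores.mean(), scores.std())                      # averaged single estimate
```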

Fig. 3. Validation of the model of QoS in the LTE network

Figure 4 presents the throughput in Mbps for two different MCS values, 22 (left) and 12 (right). In each case, when the number of users grows, the throughput decreases. Comparing the values between the two plots shows that, for an equal number of users, a higher MCS gives a higher throughput. When the MCS decreases, the scatter of the measured throughput data increases.

Fig. 4. Throughput with MCS = 22 (left) and MCS = 12 (right)

Figure 5 presents the drop ratio. It compares the number of drops when prioritization based on the distance between the user and the base station is applied against the drops without prioritization. When the number of users grows, the number of rejected beacons grows too, which increases the drop ratio. The observed parameter degradation when connecting more users is related to the priority-based Scheduler, in which the lower-priority queues may not be served in the case of network overload or congestion.

Fig. 5. Drop ratio

In assessing the performance of the admission control of the Scheduler, it is also assumed that the UEs are evenly distributed within the range of the cell according to Sect. 7.1.6.1 of [1]. Each UE periodically sends a signal so that its channel condition is known for each TTI period of 1 ms. Figure 6 (left) presents the dependence of the average time to establish a connection from the UE to the eNodeB on the number of UEs within the scope of the eNodeB, for three different values of request intensity: 10%, 50% and 100%. A trend of increasing connection establishment time with an increasing number of UEs within the eNodeB is easy to see, and at higher intensity the time is significantly larger. With 200 UEs and 100% intensity it reaches 92 ms, while with 200 UEs and 10% intensity the average time is about 3 times less, 32 ms.

Fig. 6. Average time for connection establishment with request intensity of 10%, 50% and 100% (left) and with 100 UEs (right)

Figure 6 (right) shows the dependence of the average time to establish a connection from the UE to the eNodeB on the average request intensity when 100 UEs are under the area of the eNodeB. The connection establishment time increases as the request intensity increases: over the first 10% of the increase it grows from 10 ms to 17 ms, after which the increase is significantly smoother and the time varies in the range of 18 ms to 23 ms.

The results give reason to conclude that the presented Scheduler algorithm for an LTE network can be applied successfully for numbers of UEs under 100, because regardless of the request intensity of the active UEs the average connection time stays under 25 ms, a time that fully satisfies the requirements of [1].

6 Conclusion

The main goal of researchers is to create a smart network which is flexible, robust and cost effective. For this reason QoS stays in focus in every network: wired, wireless or hybrid. This paper proposes an algorithm for the Scheduler to prioritize users in order to fit the bandwidth requirement while satisfying application needs, based on analytical data for many factors such as the MCS in use, the TB size, the number of users and the prioritization of users. The paper also proposes a simulation framework for LTE technology, which realizes an efficient method of QoS for the LTE service classes. The simulation results show that the proposed mechanism improves QoS, but the observed parameters degrade when more subscribers are connected, which is related to the priority-based Scheduler, in which lower-priority queues may not be served in the case of network overload or congestion. Two QoS parameters are presented, throughput and drop ratio, with and without prioritization. A minimum transmission was always assured for all the service classes, although with different performances due to the prioritization.