1 Introduction

Building future smart societies relies on technologies such as large-scale sensor networks [1] and the IoT [2], which are continuously evolving. The growth of self-governing smart robots and vehicles has attracted wide attention across various areas for enhancing data communication and vehicle safety. Smart self-governing vehicles such as the Google Car [3] are built on a cognitive (observation) architecture/framework that fuses information collected from different onboard sensors and uses artificial intelligence (AI) models such as machine learning (ML) and deep learning (DL) for smart manoeuvring on the road among other vehicles. Nonetheless, the smartness of an autonomous vehicle can be further improved by exploiting the computing and networking capabilities of the smart transport system. Owing to the constraints of on-board sensors, driving safety depends heavily on a low-latency, highly reliable wireless communication environment for efficient transmission of control packets (CP) [4]. Further, to avoid high management cost and achieve completely self-sustainable, low-cost ubiquitous systems for the IoT and smart cities, the research community has devoted considerable interest to ambient energy-saving technologies.

The Internet of Things (IoT) paradigm envisions a wide infrastructure network of "things" that forms a pervasive computing environment [5]. It is defined as a global network with an infrastructure that has self-configuring capabilities [6]. The IoT is an intelligent network that connects billions of things via the Internet using a variety of communications technologies, such as conventional long term evolution (LTE), Wi-Fi, ZigBee, wireless sensor networks (WSNs), Ethernet, as well as Internet Protocol Version 6 (IPv6) over low-power wireless personal area networks (6LoWPAN), the low-power wide-area network from the LoRa Alliance (LoRaWAN), LTE machine type communications (LTE-MTC), narrowband IoT (NB-IoT), mmWave and many other communications technologies. The IoT is therefore rapidly transforming into a highly heterogeneous ecosystem that provides interoperability among different types of devices and heterogeneous communications technologies, as shown in Fig. 1. Many interconnected objects will be able to sense physical phenomena and exchange data, information and knowledge through the network to enhance the user experience of the surrounding environment. Energy efficiency is a key aspect for these battery-powered IoT devices, which feature sensing, communication and processing capabilities. As most of the information in IoT applications is transferred through an edge network that connects different radio access technologies (RATs), an energy-efficient handover mechanism is required to cater for these low-power devices.

Fig. 1 The architecture of the heterogeneous wireless communication network

This paper specifically focuses on the handover (HO) operation of an IoT device from LTE to mmWave (i.e., 5th generation (5G)) technologies [7]. The performance targets are high throughput, very low latency, high-quality streaming, good availability, robust connectivity, and low energy dissipation [8, 9]. New methodologies have been modelled to meet these requirements [9]. First, network densification will increase the spatial reuse of mmWave frequencies [10], which creates additional challenges for handover (i.e., mobility) management [11]. Second, the data will move much closer to the user with the adoption of the Mobile Edge Cloud (MEC), reducing latency [9]. Third, automatic decision making using Software Defined Networking (SDN) and ML methodologies has increased the complexity of the handover operation in a heterogeneous cellular network. Artificial intelligence (AI) methodologies for automatic decision making in cellular networks have recently been modelled to optimize handover operation, resource allocation [12], and energy-efficient RAT selection [13]. Further, a huge amount of data is being collected for analyzing the characteristics of heterogeneous wireless networks [14].

Recently, several handover mechanisms that require accurate measurements have been presented [16]. However, not every parameter can be modelled accurately in HWNs; manual optimization procedures are required, which limits the usage of these methods in practical applications [15]. Further, the majority of the existing handover execution methods [16,17,18,19] are designed to meet QoS criteria of the network. Very limited work has considered RAT selection based on user preference in HWNs [20, 21], and the handover execution in [22] does not achieve a good tradeoff between energy minimization and handover efficiency (success rate). The work in [23] presented a KNN-based handover execution model for a 5G cellular network with good target discovery performance and showed the benefit of using ML techniques for decision making; however, that model is designed for a homogeneous network. Research still lacks guidance on how to successfully employ ML methodologies in a heterogeneous cellular network [24]. To overcome these research challenges, this paper presents a handover execution algorithm for a heterogeneous cellular network using a machine learning algorithm. Further, this paper presents an improved XGBoost classification algorithm to address the data imbalance issues that affect prediction accuracy. The proposed XGBoost-based handover execution algorithm reduces the signalling overhead of heterogeneous cellular networks.

The contributions of this work are as follows:

  • Presented an efficient handover execution technique using machine learning.

  • Modelled an improved XGBoost algorithm for carrying out the handover operation even in the presence of imbalanced features.

  • Achieved a much better handover success probability, reduced handover failures, and reduced handover execution energy and energy overhead.

The paper is organized as follows. Section 2 presents the efficient radio access technology selection algorithm. The results and analysis are described in Sect. 3, and the last section concludes the research work and outlines future work for enhancing handover performance.

2 An efficient radio access technology selection method using machine learning algorithm

This section presents an efficient radio access technology selection (i.e., handover) mechanism using a machine learning algorithm. First, the system model for carrying out a handover operation is presented. Second, the machine learning model used for handover execution is discussed. Third, the standard and proposed handover execution models are described. Finally, the improvement of the XGBoost algorithm for enhancing handover execution performance is discussed.

2.1 System and radio model

Let us consider a dense heterogeneous wireless communication network (HWCN) where the LTE and mmWave (5G) networks overlap each other with radius s. The IoT devices are placed randomly in the HWCN environment following a Poisson distribution [25] and are mobile. LTE and mmWave (5G) operate with different frequency bands and technologies. An IoT device connected to the current cellular network reports the measured downlink radio frequencies to the respective base stations (BSs). The major difference here is that the BS can decide to reconfigure the measurement gap in LTE, because the machine learning model can recognize that the mmWave band might not possess feasible signal power for maintaining the ongoing communication. Further, once the IoT device reaches the edge of the network, it must hand over to another radio access technology (RAT) to continue its services. The expected number of IoT devices that can be handled per unit area is described by the process \(\mu\) with intensity parameter \(\varphi\). This work describes the process \(\mu\) for the respective IoT devices O in the cellular communication environment X. The number of IoT devices is obtained through a Poisson distribution with mean \(\varphi X\) described by the following equation

$$\varphi X = \varphi \pi s^{2}$$
(1)

where s depicts the cellular network radius. The jth IoT device location is obtained through a continuous uniform distribution in \({\mathbb{S}}^{2}\) using polar coordinates \(\left( {s_{j} ,\theta_{j} } \right)\), where \(0 \le s_{j} \le s\), \(0 \le \theta_{j} \le 2\pi\) and \(j = 1,2,3, \ldots ,O\).
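As an illustration of this system model, the following Python sketch samples the number of IoT devices from a Poisson distribution with mean \(\varphi \pi s^{2}\) (Eq. 1) and places them uniformly over the cell disk using the polar coordinates \((s_j, \theta_j)\). The intensity value and random seed are illustrative assumptions; the cell radius reuses the 350 m value stated in Sect. 3.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # illustrative seed

def drop_iot_devices(phi, s):
    """Sample IoT device positions in a cell of radius s following a
    Poisson point process with intensity phi (devices per unit area)."""
    # Number of devices O ~ Poisson(phi * pi * s^2), i.e. Eq. (1)
    O = rng.poisson(phi * np.pi * s ** 2)
    # Radius uses a sqrt transform so points are uniform over the disk;
    # the angle is uniform in [0, 2*pi), matching (s_j, theta_j)
    s_j = s * np.sqrt(rng.uniform(0.0, 1.0, size=O))
    theta_j = rng.uniform(0.0, 2.0 * np.pi, size=O)
    return np.column_stack((s_j * np.cos(theta_j), s_j * np.sin(theta_j)))

positions = drop_iot_devices(phi=1e-3, s=350.0)  # illustrative intensity
```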

The improved extreme gradient boosting (IXGBoost) classification model is used to override handover decision making using historical data on the handover success probability of the respective IoT devices. For the model to be valid, the collection session U cannot exceed the channel coherence time. Further, the sample size collected cannot exceed the total number of attempted handovers, as not every IoT device needs a handover operation. The LTE network executes the HO execution algorithm for the respective IoT device every time a new IoT device enters the LTE network or hands off to a new LTE network.

2.2 Machine learning algorithm for handover execution

The existing model used the KNN machine learning model for carrying out the handover operation in a dense heterogeneous wireless communication environment with good efficiency. However, KNN is difficult to process in a parallel fashion, and when data imbalance exists, the accuracy of its handover execution model is impacted significantly, degrading the user quality of experience. Thus, this paper uses the XGBoost classification algorithm for predicting handover execution, because the individual trees can be executed in a parallel manner on parallel computing platforms and can therefore meet the real-time requirements of HWNs under distributed BSs. The model scales with the input size and can learn high correlations among the feature set. More detail on XGBoost, an ensemble learning classifier, is given in Sect. 2.4. The model minimizes an objective with a differentiable convex loss function and the regularization term \(\beta \| x\|_{1} + \frac{1}{2}\varphi \|x\|_{2}^{2} + \varphi U\), where U depicts the number of leaves and x is the vector of leaf weights of the respective gradient boosted tree. The regularization term aids in avoiding the overfitting problem and optimizes the computational complexity. The \(n \times o\) matrix of the feature set to be learned is described by the following equation

$$Y = \left[ {Y_{j} } \right]_{j = 1}^{o}$$
(2)

where o depicts the number of features considered for learning and \(Y_{j}\) is a multi-dimensional feature vector obtained per time instance. The supervised label vector is defined by z, where 1 denotes that handover is executed and 0 that handover is not executed. This paper considers five features: the coordinates of the IoT device, the distance between the IoT device and the BS, the reference symbol received power (RSRP) in LTE and mmWave, the RSRP measurement update based on X1, and the RSRP measurement update based on X2. The last three parameters can be collected directly from the IoT devices, while the first two features are obtained through RRC (radio resource control) messages using the observed time difference of arrival or a global navigation satellite system [26]. The hyperparameters are optimized using a grid search with K-fold cross-validation to achieve improved handover execution accuracy. Using [10], the time complexity of our model can be described by the following equation,

$${\mathcal{C}} = O\left( {mn\left( {e_{ \uparrow } F + n\log n} \right)} \right)$$
(3)

where \(e_{ \uparrow }\) describes the maximum boosted tree depth and F describes the overall number of trees. The computational complexity is measured with respect to the total number of IoT devices covered within a cellular network and the frequency of conveyed measurements.
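A minimal training sketch under these definitions is given below: a feature matrix Y with the five features listed above is fitted with an XGBoost classifier whose hyperparameters (maximum tree depth \(e_{ \uparrow }\), number of trees F, learning rate) are tuned via grid search with K-fold cross-validation. The grid values and the synthetic data are illustrative assumptions; the paper does not specify the actual search ranges.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import GridSearchCV, KFold

# Placeholder data standing in for the n x o matrix Y of Eq. (2):
# columns correspond to device coordinates, device-BS distance,
# LTE/mmWave RSRP, and the X1/X2 RSRP measurement updates.
rng = np.random.default_rng(0)
Y = rng.normal(size=(500, 5))
z = rng.integers(0, 2, size=500)  # 1 = handover executed, 0 = not executed

clf = xgb.XGBClassifier(tree_method="hist", n_jobs=-1)  # parallel tree building

param_grid = {
    "max_depth": [3, 5, 7],          # e_up: maximum boosted-tree depth
    "n_estimators": [50, 100, 200],  # F: overall number of trees
    "learning_rate": [0.05, 0.1, 0.3],
}
search = GridSearchCV(clf, param_grid,
                      cv=KFold(n_splits=10, shuffle=True, random_state=0),
                      scoring="roc_auc")
search.fit(Y, z)
print(search.best_params_, search.best_score_)
```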

2.3 Handover mechanism

As described in the LTE standard [27], when the IoT device measures an RSRP of the LTE network that is lower than the handover quality specifier, it initializes RRC event X2. The LTE network then reconfigures the RRC measurement gap. Further, when the IoT device measures an RSRP higher than the handover quality specifier, it initializes RRC event X1. If the mmWave power is higher than the predefined quality specifier, the IoT device initializes RRC event Y2, randomly selects a slot in the mmWave channel for carrying out communication, and the HO is successfully carried out. The complete standard handover execution process is shown in Fig. 2, from which it can be seen that the HO attempt is made in phase A and the handover execution is done in phase Y, where the LTE network decides to allow the HO.

Using the traditional model introduces signalling overhead. Thus, this paper presents a machine learning-based handover execution algorithm for HWNs, shown as Algorithm 1. The decision of whether to accept the IoT device measurement or to rely on the machine learning-based handover success prediction is made in Algorithm 1. The ROC curve used for predicting whether the HO will fail or succeed is computed with tenfold cross-validation using the improved XGBoost algorithm. If the LTE received measured power is lower than the standard handover quality specifier and the forecasted received power of mmWave is higher than the predefined handover quality specifier, the HO algorithm proceeds with the standard execution process. Otherwise, if the forecasted mmWave received power is lower than the handover quality specifier, the LTE network prevents the IoT device from being handed over from LTE to mmWave, thus helping the IoT device avoid likely handover failures. Figure 3 shows the handover execution process of the proposed machine learning-based handover execution model for HWNs.
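A sketch of this decision logic is shown below, assuming the trained classifier's predicted handover success probability stands in for the forecast of the mmWave received power. The threshold reuses the X2 event value from Sect. 3; the function and variable names are hypothetical.

```python
def handover_decision(lte_rsrp_dbm, features, model,
                      x2_threshold_dbm=-130.0, p_min=0.5):
    """Sketch of Algorithm 1: the BS either follows the standard HO
    procedure or blocks the handover based on the classifier output."""
    if lte_rsrp_dbm >= x2_threshold_dbm:
        # LTE link is still above the handover quality specifier
        return "stay_on_lte"
    # Predicted probability that an LTE -> mmWave handover would succeed
    p_success = model.predict_proba(features.reshape(1, -1))[0, 1]
    if p_success >= p_min:
        return "execute_standard_handover"  # proceed with standard HO process
    # Likely failure: prevent the HO request and keep the device on LTE
    return "block_handover"
```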

Fig. 2 Standard handover execution algorithm for the heterogeneous wireless communication network

2.4 Improved XGBoost classifier model

The XGBoost classifier model utilizes an additive learning mechanism based on the second-order derivative. The first- and second-order derivatives (i.e., the gradient and the Hessian) of the loss function with respect to the prediction are needed for fitting the automatic handover execution methodology. Let n describe the sample size and o the number of features considered. The raw prediction before applying a sigmoid function is described by \(a_{j}\), and the probability-based prediction is described by \(\hat{z}_{j} = \alpha \left( {a_{j} } \right)\), where \(\alpha ( \cdot )\) defines the sigmoid function. Further, \(\beta\) and \(\varphi\) describe the parameters of the two loss functions, respectively (Fig. 3).

Fig. 3 Handover execution algorithm using a machine learning algorithm for the heterogeneous wireless communication network

As described in [27], the additive learning mechanism can be obtained using the following equation,

$$M^{\left( u \right)} = \mathop \sum \limits_{j = 1}^{o} m\left( {z_{j} ,a_{j}^{{\left( {u - 1} \right)}} + g_{u} \left( {y_{j} } \right)} \right) + \delta \left( {g_{u} } \right)$$
(4)

where u represents the iteration number for training the automatic handover execution model. Then, by a second-order Taylor expansion of Eq. (4), we can obtain the following equation

$$\begin{aligned} M^{\left( u \right)} & \approx \mathop \sum \limits_{j = 1}^{o} \left[ {m\left( {z_{j} ,a_{j}^{{\left( {u - 1} \right)}} } \right) + h_{j} g_{u} \left( {y_{j} } \right) + \frac{1}{2}i_{j} \left( {g_{u} \left( {y_{j} } \right)} \right)^{2} } \right] + \delta \left( {g_{u} } \right) \\ & \propto \mathop \sum \limits_{j = 1}^{o} \left[ {h_{j} g_{u} \left( {y_{j} } \right) + \frac{1}{2}i_{j} \left( {g_{u} \left( {y_{j} } \right)} \right)^{2} } \right] + \delta \left( {g_{u} } \right), \\ \end{aligned}$$
(5)

where \(h_{j}\) depicts the gradient, which can be established using the following equation

$$h_{j} = \frac{{\partial {\mathcal{M}}}}{{\partial a_{j} }}.$$
(6)

and \(i_{j}\) describes the Hessian, which is established using the following equation

$$i_{j} = \frac{{\partial^{2} {\mathcal{M}}}}{{\partial a_{j}^{2} }}.$$
(7)

The functions \(h_{j}\) and \(i_{j}\) are scalar parameters, because the individual boosting trees are applied to solve binary problems. As XGBoost doesn't offer automatic differentiation, this paper presents manually derived derivatives. For the loss function, the sigmoid is chosen as the activation function; its derivative is described in the equation below

$$\begin{aligned} \frac{{\partial \hat{z}}}{\partial a} & = \frac{\partial \alpha \left( a \right)}{{\partial a}} \\ & = \alpha \left( a \right)\left( {1 - \alpha \left( a \right)} \right) \\ & = \hat{z}\left( {1 - \hat{z}} \right). \\ \end{aligned}$$
(8)

The weighted loss function for handover decision making in a heterogeneous wireless network can be obtained using the following equation

$${\mathcal{M}}_{v} = - \mathop \sum \limits_{j = 1}^{n} \left( {\beta z_{j} \log \left( {\hat{z}_{j} } \right) + \left( {1 - z_{j} } \right)\log \left( {1 - \hat{z}_{j} } \right)} \right)$$
(9)

where \(\beta\) depicts the bias parameter that accounts for the imbalanced features. Whenever \(\beta\) exceeds 1, additional loss is incurred when a sample with label 1 is classified as 0. Similarly, if \(\beta\) is less than 1, the loss function weight is adjusted so that data with label 0 are classified correctly.

The first-order derivative is described below

$$\frac{{\partial {\mathcal{M}}_{v} }}{{\partial a_{j} }} = - \beta^{{z_{j} }} \left( {z_{j} - \hat{z}_{j} } \right).$$
(10)

The above equation is similar to the term \(\frac{{\partial {\mathcal{M}}}}{\partial a}\) used in computing the general cross-entropy loss; the major difference is that the factor \(\beta^{{z_{j} }}\) controls the weighting. The second derivative is obtained by differentiating Eq. (10) again with respect to \(a_{j}\), using Eq. (8), as follows

$$\frac{{\partial^{2} {\mathcal{M}}_{v} }}{{\partial a_{j}^{2} }} = \beta^{{z_{j} }} \hat{z}_{j} \left( {1 - \hat{z}_{j} } \right).$$
(11)
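Since XGBoost accepts user-supplied gradient and Hessian functions, Eqs. (8)–(11) can be plugged in as a custom objective. The sketch below assumes the xgb.train interface and an illustrative value of \(\beta\); the labels \(z_j\) and raw margins \(a_j\) map to dtrain.get_label() and preds, respectively.

```python
import numpy as np
import xgboost as xgb

BETA = 2.0  # bias parameter beta of Eq. (9); > 1 penalizes missed handovers

def weighted_logloss(preds, dtrain):
    """Custom objective implementing Eqs. (8)-(11).
    preds holds the raw margins a_j; labels z_j come from the DMatrix."""
    z = dtrain.get_label()
    z_hat = 1.0 / (1.0 + np.exp(-preds))  # sigmoid of Eq. (8)
    w = np.power(BETA, z)                 # the beta**z_j weighting factor
    grad = w * (z_hat - z)                # first derivative, Eq. (10)
    hess = w * z_hat * (1.0 - z_hat)      # second derivative, Eq. (11)
    return grad, hess

# Usage (Y and z as defined in Sect. 2.2):
# dtrain = xgb.DMatrix(Y, label=z)
# booster = xgb.train({"max_depth": 5}, dtrain,
#                     num_boost_round=100, obj=weighted_logloss)
```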

The next section presents the performance achieved by the proposed improved XGBoost-based handover mechanism against the existing KNN-based handover mechanism.

3 Results and discussion

This section presents an experimental analysis of the handover performance of the proposed improved XGBoost-based handover execution model against the existing KNN-based handover execution model. Both models are implemented in Python for carrying out the simulation. The simulation parameters are as follows. The LTE centre frequency is set to 2.1 GHz, the 5G centre frequency to 28 GHz, the LTE bandwidth to 20 MHz, and the 5G bandwidth to 100 MHz; COST 231 is used as the LTE propagation model and the model presented in [28] as the 5G propagation model; the simulation time is set to 50 ms, the cell radius to 350 m, the LTE BS power to 46 dBm, and the 5G BS power to 46 dBm; the RRC event X1, X2, and Y2 thresholds are set to − 125 dBm, − 130 dBm, and − 95 dBm, respectively. The experiments evaluate performance in terms of handover success rate, handover failures, energy overhead, and total energy consumption for carrying out the handover operation while varying the number of IoT devices.
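For reproducibility, the stated simulation parameters can be collected in one place; the following dictionary is a sketch that simply mirrors the values listed above (the key names are our own and purely illustrative).

```python
# Simulation parameters from Sect. 3 (key names are illustrative)
SIM_PARAMS = {
    "lte_center_freq_ghz": 2.1,
    "nr5g_center_freq_ghz": 28.0,      # mmWave
    "lte_bandwidth_mhz": 20,
    "nr5g_bandwidth_mhz": 100,
    "lte_pathloss_model": "COST 231",
    "nr5g_pathloss_model": "ref. [28]",
    "simulation_time_ms": 50,
    "cell_radius_m": 350,
    "lte_bs_power_dbm": 46,
    "nr5g_bs_power_dbm": 46,
    "rrc_event_thresholds_dbm": {"X1": -125, "X2": -130, "Y2": -95},
    "iot_device_counts": [50, 100, 150, 200],
}
```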

3.1 Handover success rate performance considering varied IoT device counts

This section presents the handover success rate achieved by the proposed improved XGBoost-based handover execution model against the existing KNN-based handover execution model. The experiment is conducted by varying the number of IoT devices; the handover success rates obtained are shown in Fig. 4. From Fig. 4 it can be seen that the XGBoost-based handover execution model improves the handover success rate by 0.3745%, 0.7006%, 1.1486%, and 0.8388% over the KNN-based handover execution model when the number of IoT devices is 50, 100, 150, and 200, respectively. On average, the XGBoost-based handover execution model improves the handover success rate by 0.765% over the KNN-based model. The graph shows that as the number of IoT devices increases, the handover success probability decreases for both the existing and the proposed handover execution models. From the results it can be stated that the XGBoost-based handover execution model achieves a superior handover success rate irrespective of the number of IoT devices.

Fig. 4 Handover success rate performance of the XGBoost-based and KNN-based handover execution models

3.2 Handover failure performance considering varied IoT device counts

This section presents the handover failure performance achieved by the proposed improved XGBoost-based handover execution model against the existing KNN-based handover execution model. The experiment is conducted by varying the number of IoT devices; the handover failures obtained are shown in Fig. 5. From Fig. 5 it can be seen that the XGBoost-based handover execution model reduces handover failures by 100%, 46.43%, 34.43%, and 15.625% over the KNN-based handover execution model when the number of IoT devices is 50, 100, 150, and 200, respectively. On average, the XGBoost-based handover execution model reduces handover failures by 49.12% over the KNN-based model. The graph shows that as the number of IoT devices increases, the handover failures increase for both the existing and the proposed handover execution models. From the results it can be stated that the XGBoost-based handover execution model achieves a superior handover failure reduction irrespective of the number of IoT devices.

Fig. 5 Handover failure performance of the XGBoost-based and KNN-based handover execution models

3.3 Handover energy consumption performance considering varied IoT device counts

This section presents the handover energy consumption performance achieved by the proposed improved XGBoost-based handover execution model against the existing KNN-based handover execution model. The experiment is conducted by varying the number of IoT devices; the handover energy consumption obtained is shown in Fig. 6. From Fig. 6 it can be seen that the XGBoost-based handover execution model reduces handover energy consumption by 0.373%, 0.6853%, 0.63%, and 0.763% over the KNN-based handover execution model when the number of IoT devices is 50, 100, 150, and 200, respectively. On average, the XGBoost-based handover execution model reduces handover energy consumption by 0.612% over the KNN-based model. The graph shows that as the number of IoT devices increases, the handover energy consumption also increases for both the existing and the proposed handover execution models. From the results it can be stated that the XGBoost-based handover execution model achieves a superior handover energy consumption reduction irrespective of the number of IoT devices.

Fig. 6 Handover energy consumption performance of the XGBoost-based and KNN-based handover execution models

3.4 Handover energy overhead performance considering varied IoT device counts

This section presents the handover energy overhead performance achieved by the proposed improved XGBoost-based handover execution model against the existing KNN-based handover execution model. The energy overhead is measured for reconnecting IoT devices after failed handovers. The experiment is conducted by varying the number of IoT devices; the handover energy overhead obtained is shown in Fig. 7. From Fig. 7 it can be seen that the XGBoost-based handover execution model reduces handover energy overhead by 87.1795%, 55.88%, 40.3%, and 20.59% over the KNN-based handover execution model when the number of IoT devices is 50, 100, 150, and 200, respectively. On average, the XGBoost-based handover execution model reduces handover energy overhead by 50.99% over the KNN-based model. The graph shows that as the number of IoT devices increases, the handover energy overhead increases for both the existing and the proposed handover execution models. From the results it can be stated that the XGBoost-based handover execution model achieves a superior handover energy overhead reduction irrespective of the number of IoT devices.

Fig. 7 Handover energy overhead performance of the XGBoost-based and KNN-based handover execution models

4 Conclusion

This work first conducted an in-depth survey of various existing handover mechanisms for heterogeneous networks. The survey showed that existing handover mechanisms fail to achieve a good tradeoff between performance requirements and energy reduction. Further, the existing models induce energy overhead because of additional signalling overhead. To reduce signalling overhead, automatic handover mechanisms adopting machine learning have been modelled. The KNN-based handover mechanism fails to provide accurate predictions because the model cannot handle feature imbalance. To address these research issues, this paper presented a new handover mechanism adopting a machine learning technique, namely XGBoost. Further, to address feature imbalance, an improved XGBoost algorithm is modelled in this paper. The proposed XGBoost-based handover mechanism achieves a much better handover success rate than the KNN-based handover mechanism. The experiments were conducted by varying the number of IoT devices to evaluate handover performance. The results show that the XGBoost-based handover mechanism improves the handover success rate by 0.765% compared with the KNN-based mechanism, reduces handover failures by 49.12%, and reduces energy consumption and energy overhead by 0.612% and 50.99%, respectively, owing to the reduction of additional signalling overhead. The overall results show that the proposed model is robust and scalable irrespective of the number of IoT devices operating in a heterogeneous network. Future work will consider performance evaluation and RAT selection under diverse QoS constraints in a heterogeneous cellular network.