Abstract
The number of mobile devices is growing exponentially, and coping with this demand requires a highly efficient network. The rising need for high-speed mobile data rates of up to 1 Tbps may well be satisfied by the sixth generation (6G) of mobile networks, which is anticipated to operate in a sub-terahertz band and achieve speeds of at least 100 Gbps. The rapid expansion of IoT and other applications demands a significant amount of resources. 6G wireless networks can provide worldwide coverage, from the air to the sea and from the ground to space, and the new model incorporates artificial intelligence with capable security. Dynamic resource allocation is essential to support the exponential growth of data traffic caused by holographic video, AR/VR, and online gaming. This paper briefly surveys resource allocation methodologies and algorithms based on deep learning techniques such as CNN, DNN, Q-learning, deep Q-learning, reinforcement learning, and actor-critic methods. Optimal, dynamic, real-time allocation of resources can improve overall system performance. Computing, radio, power, network, and communication resources are considered. To establish a solid theoretical foundation for resource allocation in 6G wireless networks, several deep learning techniques and approaches are examined, and key performance indicators such as efficiency, latency, resource hit rate, decision delay, channel capacity, and throughput are discussed.
1 Introduction
End-to-end latency, data throughput, energy efficiency, dependability, spectrum utilization, and coverage have improved from 1G to 5G networks. The ITU defines three 5G service classes: enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable low-latency communication (URLLC) [1,2,3]. However, 5G will not match the demands of the technologies expected by 2030. 6G wireless networks will deliver worldwide coverage, intelligence, security, and increased spectrum/energy/cost efficiency [7, 8]. 5G may struggle to support large-scale heterogeneous devices, and most 5G networks rely on offline calculations stored on a server, whereas 6G can acquire resources in real time during job execution to improve network performance [13]. Storage and computational resources can be placed at the mobile edge for delay-sensitive and battery-limited devices. Online CNN-based algorithms have been suggested for uncertain situations [17]. DRL handles both continuous and discrete actions, and new actor-critic models have been created for discrete offloading choices [23]. A DQN model [29] optimizes resource allocation and offline offloading (Table 1; Fig. 1).
Organization of Paper
This paper discusses the vision and technical objectives of 6G wireless networks. The authors emphasize how resources such as computing, communication, networking, storage, and bandwidth can be allocated optimally and dynamically to requesting users/devices using various deep learning algorithms. Measuring parameters that define system performance, such as throughput, efficiency, system capacity, and decision delay, are also discussed. The Appendix summarizes the algorithms together with the resources used, the types of devices involved, and cost and complexity analysis. Finally, future research directions are listed.
1.1 6G Vision
6G will provide terrestrial as well as non-terrestrial communication. 6G wireless networks cover all frequencies, including sub-6 GHz, terahertz, and the optical spectrum, which supports higher data rates as well as dense device environments [1]. 6G wireless networks are intended to be tremendously diverse and dynamic, with high quality of service (QoS) and a complex architecture. Artificial intelligence, specifically machine learning and deep learning, is emerging as the fundamental solution for realizing fully intelligent network management and organization [4,5,6].
6G will be a powerful force for meeting the rugged needs of forthcoming IoT-enabled applications and overcoming 5G's constraints; as the next generation, 6G will build on 5G and will transform human life and society. Figure 2 shows the vision of 6G wireless networks and the technical objectives for 6G [1, 15, 18, 26].
1.2 Technical Objectives of 6G
Detailed technical objectives are listed below (Fig. 2; Table 2).
2 Resource Allocation for 6G Wireless Networks
As applications diversify, dynamic real-time resource allocation is needed. A variety of algorithms and methodologies have been devised and assessed based on the need for networking and computational resources, the research gaps, and the need to provide a backbone for resource allocation challenges in 6G wireless networks [16].
3C resources include physical (computing, wireless access, storage) and logical (a subset of physical) resources [12]. Real-time resource allocation is required for dynamic tasks.
Using 6G features such as low latency and high speed, fair resource allocation can enhance network performance by assigning resources dynamically in real time with deep learning algorithms; dynamic resource assignment improves the utilization of otherwise overloaded resources. AI is central to 6G and beyond [7,8,9, 12]. Common AI techniques that will be used for resource allocation in 6G wireless networks are listed below [2, 10, 11] (Table 3).
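As a concrete illustration of the simplest technique in this family, the sketch below applies tabular Q-learning to a toy dynamic channel-assignment problem. The state/action model, the per-(user, channel) link qualities, and the round-robin arrival pattern are all invented for illustration and are not taken from any surveyed paper.

```python
import random

random.seed(0)

# Toy Q-learning for dynamic channel assignment (illustrative only).
# State: index of the requesting user; action: chosen channel.
N_USERS, N_CHANNELS = 4, 3
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Hypothetical per-(user, channel) link quality standing in for throughput.
QUALITY = [[0.9, 0.2, 0.5], [0.3, 0.8, 0.4],
           [0.6, 0.1, 0.9], [0.2, 0.7, 0.3]]

Q = [[0.0] * N_CHANNELS for _ in range(N_USERS)]

def step(user, channel):
    """Reward is the link quality of the chosen channel for this user."""
    reward = QUALITY[user][channel]
    next_user = (user + 1) % N_USERS          # round-robin arrivals
    return reward, next_user

user = 0
for _ in range(5000):
    if random.random() < EPSILON:             # epsilon-greedy exploration
        channel = random.randrange(N_CHANNELS)
    else:
        channel = max(range(N_CHANNELS), key=lambda a: Q[user][a])
    reward, nxt = step(user, channel)
    # Standard Q-learning update rule.
    Q[user][channel] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[user][channel])
    user = nxt

policy = [max(range(N_CHANNELS), key=lambda a: Q[u][a]) for u in range(N_USERS)]
print(policy)  # learned best channel per user
```

Deep Q-learning replaces the table `Q` with a neural network so the same update rule scales to the large state spaces of real networks.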
3 Summary of Deep Learning Algorithms Used for 6G Wireless Networks Resource Allocation
Yang, Helin, et al. [11] demonstrate AI-powered 6G network allocation and management. Handover, spectrum, mobility, and edge computing were considered, and offline training, residual networks, and feature-matching graphics processing were also investigated, drawing on CPU and storage resources with parallel planning. Lin, Mengting, and Youping Zhao [12] address AI resource management strategies, covering radio resources as well as computing and caching resources. They reviewed 6G wireless network issues and prospects; deep Q-learning, deep double Q-learning, and their variants are studied for resource management.
Lin, Kai, et al. [17] proposed a resource allocation algorithm for 6G-enabled massive IoT, examining the impact of task changes. The authors employed a backtracking dynamic nested neural network, yielding a stable system with faster decision-making. Hu, Shisheng, et al. [19] suggested deep reinforcement learning with blockchain for dynamic resource sharing. The authors reduced blockchain overheads and simplified AI data gathering; further study is needed to reduce computational complexity when using private and public blockchains.
Mukherjee, Amrit, et al. [20] proposed an algorithm based on a convolutional neural network (CNN) with back propagation. They analysed the allocation of resources to discrete nodes in a cluster; simulations showed reduced resource wastage from redundant data and improved overall network efficiency. Networking and computational resources are used.
Guan, Wanqing, et al. [21] derived a deep reinforcement learning (DRL) technique enabling service-oriented resource allocation across several logical networks on a shared infrastructure offering AI-customized slicing. The authors employed computational resources to evaluate service quality for resource allocation; fast E2E slicing based on real-time user demand prediction was stated as a future goal.
Kibria, Mirza Golam, et al. [22] discussed efficient operation, optimization, and control using AI and ML. The authors noted that systems can be made smart, systematic, and intelligent through big data analytics; the resources considered were networking resources. They also discussed the advantages and difficulties of big data analytics, noting that processing, managing, and leveraging massive amounts of data is difficult and complex.
Liu, Kai-Hsiang, and Wanjiun Liao [23] used DRL to deal with time-varying user requests, focusing on energy consumption and long-term delay in a multi-user system, which ensured good service for task uploads. They jointly optimized computational and radio resources.
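The actor-critic idea behind such offloading schemes can be sketched on a deliberately tiny problem: a single-state agent repeatedly chooses between local execution and offloading, guided by a softmax policy (actor) and a value estimate (critic). The cost numbers are made up and this is a didactic toy, not the algorithm of [23].

```python
import math
import random

random.seed(1)

# Assumed average delay-plus-energy cost per action: 0 = local, 1 = offload.
COSTS = {0: 1.0, 1: 0.4}
ALPHA_ACTOR, ALPHA_CRITIC = 0.05, 0.1

theta = [0.0, 0.0]   # actor: action preferences
v = 0.0              # critic: single-state value estimate

def softmax(prefs):
    exps = [math.exp(p - max(prefs)) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

for _ in range(3000):
    probs = softmax(theta)
    action = 0 if random.random() < probs[0] else 1
    reward = -COSTS[action] + random.gauss(0, 0.05)   # noisy observed cost
    td_error = reward - v                 # critic's one-step TD error
    v += ALPHA_CRITIC * td_error          # critic update
    for a in range(2):                    # actor: policy-gradient step
        grad = (1 - probs[a]) if a == action else -probs[a]
        theta[a] += ALPHA_ACTOR * td_error * grad

probs = softmax(theta)
print(f"P(offload) = {probs[1]:.2f}")  # policy shifts toward the cheaper action
```

Real actor-critic offloading agents use neural networks for both the actor and the critic and condition on the channel and queue state, but the two-update structure above is the same.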
Liu, Yongshuai, Jiaxin Ding, and Xin Liu [24] allocated computing, storage, and network resources using a constrained MDP and reinforcement learning. The authors introduced instantaneous and cumulative network slicing constraints handled via reinforcement learning, and reported that their strategy reduces constraint costs.
Sami, Hani, et al. [25] introduced deep reinforcement learning (DRL) and a Markov decision process for allocating computing resources under dynamically changing service demands. IScaler, their IoE resource scaling and service placement solution, uses DRL and MDP to estimate resource placement and perform intelligent scaling; Google Cluster trace datasets provided the simulation workloads.
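The MDP machinery underlying such scalers can be shown with value iteration on a toy scaling problem: states are replica counts, actions scale down/hold/up, and the cost trades running servers against an SLA penalty. The cost model and parameters below are hypothetical, chosen only to make the MDP formulation concrete; IScaler itself learns the policy with DRL rather than solving the MDP exactly.

```python
# Toy scaling MDP solved by value iteration.
STATES = range(5)               # number of running replicas
ACTIONS = (-1, 0, 1)            # scale down / hold / scale up
DEMAND, RUN_COST, SLA_PENALTY, GAMMA = 2, 1.0, 5.0, 0.9

def step(s, a):
    """Deterministic transition with reward = negative operating cost."""
    s2 = min(max(s + a, 0), 4)
    cost = RUN_COST * s2 + (SLA_PENALTY if s2 < DEMAND else 0.0)
    return s2, -cost

V = [0.0] * 5
for _ in range(200):            # Bellman backups until convergence
    V = [max(step(s, a)[1] + GAMMA * V[step(s, a)[0]] for a in ACTIONS)
         for s in STATES]

policy = [max(ACTIONS, key=lambda a: step(s, a)[1] + GAMMA * V[step(s, a)[0]])
          for s in STATES]
print(policy)  # every state is steered toward DEMAND replicas
```

With these numbers the optimal policy scales up below the demand level, holds at it, and scales down above it, which is exactly the behavior a learned scaler should approximate.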
Bhattacharya, Pronaya, et al. [29] proposed dynamic resource allocation to solve spectrum allocation difficulties. Blockchain is used to model 6G DQN-based dynamic spectrum allocation, and Q-learning and DQN algorithms are used to simulate system performance. The blockchain-assisted scheme improves spectrum allocation fairness by 13.57% compared to non-DQN solutions.
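A standard way to quantify the kind of fairness gain reported above is Jain's fairness index, which is 1 for a perfectly even allocation and approaches 1/n when one user monopolizes the resource. The sample allocations below are made up for illustration; the surveyed paper's exact metric may differ.

```python
def jains_index(allocs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2), in (1/n, 1]."""
    n = len(allocs)
    return sum(allocs) ** 2 / (n * sum(x * x for x in allocs))

equal = [10, 10, 10, 10]   # spectrum shared evenly
skewed = [37, 1, 1, 1]     # one user hogs the spectrum
print(jains_index(equal))   # 1.0: perfectly fair
print(jains_index(skewed))  # near 1/n: highly unfair
```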
Li, Meng, et al. [30] propose using blockchain technology to overcome restricted computing resources and non-intelligent resource management. The authors suggest a novel reinforcement learning approach to improve resource allocation and reduce waste; a Markov decision process forms the basis of cloud-edge collaborative resource allocation.
Waqar, Noor, et al. [31] modeled network access and infrastructure, presenting a time-varying dynamic system model for HAPs with MEC servers. Decentralized reinforcement learning-based value iteration reduces computation and communication overhead. Vehicles acting as intelligent agents are assessed using Q-learning, deep Q-learning, and double deep Q-learning in terms of competency, complexity, cost, and size.
Ganewattha, Chanaka, et al. [32] used deep learning to allocate wireless resources in shared spectrum bands with reliable channel forecasting. Encoder-decoder-based Bayesian models capture wireless channel uncertainty; the University of Oulu provided channel usage and synthetic data. The RA technique reaches Nash equilibrium under 2N access points, and the channel allocation process converges quickly, enhancing network sum rates.
Alwarafy et al. [33] discussed the scalability and heterogeneity problems of 6G networks. A deep reinforcement learning technique is used to solve the resource allocation problem; the suggested solution addresses dynamic power allocation and multi-RAT assignment in 6G HetNets.
Gong, Yongkang, et al. [9] presented deep reinforcement learning for industrial IoT systems to allocate resources and schedule tasks, emphasizing energy use and delay. Offloading is handled by a new isotone action generation technique and an adaptive action updating strategy, while convex optimization solves the time-varying resource allocation problems. Gain rate, batch size, RHC intervals, and training steps measure system performance.
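A representative example of the convex formulations such papers rely on is water-filling, the classic closed-form solution for splitting a power budget across parallel channels to maximize sum capacity. This is shown as background for the convex-optimization step, not as the specific solver of [9]; the channel gains and power budget are made-up values.

```python
def water_filling(gains, total_power, iters=100):
    """Bisect on the water level mu, where p_i = max(0, mu - 1/g_i)."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu        # water level too high: budget exceeded
        else:
            lo = mu        # feasible: raise the level
    return [max(0.0, lo - 1.0 / g) for g in gains]

# Three parallel channels with assumed gains; weakest gets cut off entirely.
powers = water_filling([2.0, 1.0, 0.25], total_power=3.0)
print(powers)  # strongest channel receives the most power
```

For these inputs the water level settles at 2.25, giving allocations of 1.75 and 1.25 to the two strong channels and zero to the weak one.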
Kasgari, Ali Taleb Zadeh, et al. [34] presented model-free resource allocation for the downlink of a 6G ultra-reliable low-latency (URLLC) wireless network, achieving end-to-end high reliability and low latency under specified data constraints. A GAN-based model enables deep reinforcement learning to capture network conditions and run reliable systems. The proposed resource allocation model leverages multi-user OFDM (OFDMA); the deep reinforcement learning network is fed rate constraints and latency targets to minimize power while maintaining reliability. Simulations show the proposed model reduces training time.
Sanghvi, Jainam, et al. [35] proposed an edge-intelligent model using micro base station (MBS) units for resource allocation; MBS guarantees increased channel gain and decreased energy loss. The deep reinforcement-enabled edge AI scheme supports responsive edge caching and better learning. The proposed scheme is compared with 5G in terms of throughput and latency.
Xiao, Da, et al. [36] proposed a deep Q-network (DQN) to maximize the acceptance ratio and place high-priority uRLLC requests first, with slicing characterized as an MDP. A reward function based on service prioritization is defined; the MDP is chosen for tractable actions in the DQN, and once trained, the DQN approximates the ideal solution.
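A service-prioritized reward of this kind can be sketched in a few lines: accepting a high-priority (e.g. uRLLC) request earns more than accepting a lower-priority one, and rejections are penalized in proportion. The numeric weights below are assumptions for illustration, not the function used in [36].

```python
# Hypothetical priority weights per service class.
PRIORITY_WEIGHT = {"urllc": 3.0, "embb": 1.0, "mmtc": 0.5}

def reward(request_type, accepted):
    """Reward for an admission decision, scaled by service priority."""
    w = PRIORITY_WEIGHT[request_type]
    return w if accepted else -w   # symmetric penalty for rejection

print(reward("urllc", True))   # 3.0: accepting uRLLC pays the most
print(reward("embb", False))   # -1.0: rejecting eMBB costs less than uRLLC
```

Feeding such a reward into a DQN biases the learned admission policy toward accepting latency-critical slices first.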
In previous research work [37], the authors proposed a deep learning network to optimize resource allocation to base stations in a 6G communication network. The 6G network was simulated using standard 6G parameter values. A neural network called Base Station Optimization Network (BSOnet) was created in MATLAB and trained on a dataset with varying parameter values. When deployed in the simulated 6G network, it consumed less power. This network is a step toward optimizing the developing 6G networks, and the authors hope this study provides the scientific community with a path to further research in this area. Table 4 shows the summary of deep learning algorithms used for resource allocation in 6G wireless networks (Fig. 3).
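The core idea of a learned base-station power model can be reduced to its simplest form: fit a model mapping traffic load to transmit power by gradient descent on observed samples. BSOnet itself is a larger MATLAB neural network; the one-feature linear model and the data-generating rule below (power = 2·load + 1 plus noise) are invented purely to show the training loop.

```python
import random

random.seed(2)

def generate_sample():
    """Hypothetical telemetry: power grows linearly with traffic load."""
    load = random.random()
    power = 2.0 * load + 1.0 + random.gauss(0, 0.01)
    return load, power

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    x, y = generate_sample()
    pred = w * x + b
    err = pred - y
    w -= lr * err * x   # gradient of squared error w.r.t. w
    b -= lr * err       # ...and w.r.t. b

print(round(w, 1), round(b, 1))  # recovers roughly (2.0, 1.0)
```

A real BSOnet-style model replaces the linear map with a multi-layer network and the synthetic rule with measured network data, but the fit-then-deploy workflow is the same.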
4 Conclusion and Future Scope
This study discusses the vision, trends, and convergence of artificial intelligence for 6G wireless networks. The authors describe numerous AI algorithms that will be employed for resource allocation and summarize several deep learning-based algorithms for resource optimization in support of varied services, emphasising certain research paths and possible solutions for 6G wireless networks. Open topics include choosing among deep learning algorithms for different application scenarios and designing techniques to lower computing costs. The powerful learning and reasoning abilities of deep learning algorithms can improve network performance, and deep learning techniques are being designed for better accuracy and computing efficiency. 6G networks will require effective resource allocation to provide diversified services and massive connectivity; collaborations between hardware design and deep learning algorithms may also be possible.
References
You, X., et al.: Towards 6G wireless communication networks: vision, enabling technologies, and new paradigm shifts. Sci. China Inform. Sci. 64(1), 1–74 (2021)
Dogra, A., Jha, R.K., Jain, S.: A survey on beyond 5G network with the advent of 6G: architecture and emerging technologies. IEEE Access 9, 67512–67547 (2020)
Guo, W.: Explainable artificial intelligence for 6G: improving trust between human and machine. IEEE Commun. Mag. 58(6), 39–45 (2020)
Shafin, R., Liu, L., Chandrasekhar, V., Chen, H., Reed, J., Zhang, J.C.: Artificial intelligence-enabled cellular networks: a critical path to beyond-5G and 6G. IEEE Wirel. Commun. 27(2), 212–217 (2020)
Kato, N., Mao, B., Tang, F., Kawamoto, Y., Liu, J.: Ten challenges in advancing machine learning technologies toward 6G. IEEE Wirel. Commun. 27(3), 96–103 (2020)
Sheth, K., Patel, K., Shah, H., Tanwar, S., Gupta, R., Kumar, N.: A taxonomy of AI techniques for 6G communication networks. Comput. Commun. 161, 279–303 (2020)
Jiang, W., Han, B., Habibi, M.A., Schotten, H.D.: The road towards 6G: a comprehensive survey. IEEE Open J. Commun. Soc. 2, 334–366 (2021)
Matinmikko-Blue, M., Yrjölä, S., Ahokangas, P.: Spectrum management in the 6G era: the role of regulation and spectrum sharing. In: 2020 2nd 6G Wireless Summit (6G SUMMIT), pp. 1–5. IEEE (2020)
Gong, Y., Yao, H., Wang, J., Li, M., Guo, S.: Edge intelligence-driven joint offloading and resource allocation for future 6G industrial internet of things. IEEE Trans. Netw. Sci. Eng. (2022)
Zafar, S., Jangsher, S., Al-Dweik, A.: Resource allocation using deep learning in mobile small cell networks. IEEE Trans. Green Commun. Netw. (2022)
Yang, H., Alphones, A., Xiong, Z., Niyato, D., Zhao, J., Kaishun, W.: Artificial-intelligence-enabled intelligent 6G networks. IEEE Netw. 34(6), 272–280 (2020)
Lin, M., Zhao, Y.: Artificial intelligence-empowered resource management for future wireless communications: a survey. China Commun. 17(3), 58–77 (2020)
Du, J., Jiang, C., Wang, J., Ren, Y., Debbah, M.: Machine learning for 6G wireless networks: carrying forward enhanced bandwidth, massive access, and ultrareliable/low-latency service. IEEE Veh. Technol. Mag. 15(4), 122–134 (2020)
Tang, F., Mao, B., Kawamoto, Y., Kato, N.: Survey on machine learning for intelligent end-to-end communication toward 6G: from network access, routing to traffic control and streaming adaption. IEEE Commun. Surv. Tutor. 23(3), 1578–1598 (2021)
Guo, F., Yu, F.R., Zhang, H., Li, X., Ji, H., Leung, V.C.M.: Enabling massive IoT toward 6G: a comprehensive survey. IEEE Internet Things J. 8(15), 11891–11915 (2021)
Jayakumar, S., S, N.: A review on resource allocation techniques in D2D communication for 5G and B5G technology. Peer-to-Peer Netw. Appl. 14(1), 243–269 (2020). https://doi.org/10.1007/s12083-020-00962-x
Lin, K., Li, Y., Zhang, Q., Fortino, G.: AI-driven collaborative resource allocation for task execution in 6G-enabled massive IoT. IEEE Internet Things J. 8(7), 5264–5273 (2021)
Xu, X., et al.: Dynamic resource allocation for load balancing in fog environment. Wirel. Commun. Mob. Comput. 2018 (2018)
Hu, S., Liang, Y.-C., Xiong, Z., Niyato, D.: Blockchain and artificial intelligence for dynamic resource sharing in 6G and beyond. IEEE Wirel. Commun. 28(4), 145–151 (2021)
Mukherjee, A., Goswami, P., Khan, M.A., Manman, L., Yang, L., Pillai, P.: Energy-efficient resource allocation strategy in massive IoT for industrial 6G applications. IEEE Internet Things J. 8(7), 5194–5201 (2020)
Guan, W., Wen, X., Wang, L., Zhaoming, L., Shen, Y.: A service-oriented deployment policy of end-to-end network slicing based on complex network theory. IEEE Access 6, 19691–19701 (2018)
Kibria, M.G., Nguyen, K., Villardi, G.P., Zhao, O., Ishizu, K., Kojima, F.: Big data analytics, machine learning, and artificial intelligence in next-generation wireless networks. IEEE Access 6, 32328–32338 (2018)
Liu, K.-H., Liao, W.: Intelligent offloading for multi-access edge computing: a new actor-critic approach. In: ICC 2020–2020 IEEE International Conference on Communications (ICC), pp. 1–6. IEEE (2020)
Liu, Y., Ding, J., Liu, X.: Resource allocation method for network slicing using constrained reinforcement learning. In: 2021 IFIP Networking Conference (IFIP Networking), pp. 1–3. IEEE (2021)
Sami, H., Otrok, H., Bentahar, J., Mourad, A.: AI-based resource provisioning of IoE services in 6G: a deep reinforcement learning approach. IEEE Trans. Netw. Serv. Manage. 18(3), 3527–3540 (2021)
Alsharif, M.H., Kelechi, A.H., Albreem, M.A., Chaudhry, S.A., Zia, M.S., Kim, S.: Sixth generation (6G) wireless networks: vision, research activities, challenges and potential solutions. Symmetry 12(4), 676 (2020)
Bernardos, C.J., Uusitalo, M.A.: European vision for the 6G network ecosystem. In: Zenodo, Honolulu, HI, USA, Tech. Rep. (2021)
Zhang, S., Zhu, D.: Towards artificial intelligence enabled 6G: state of the art, challenges, and opportunities. Comput. Netw. 183, 107556 (2020)
Bhattacharya, P., et al.: A deep-Q learning scheme for secure spectrum allocation and resource management in 6G environment. IEEE Trans. Netw. Serv. Manag. (2022). https://doi.org/10.1109/TNSM.2022.3186725
Li, M., et al.: Cloud-edge collaborative resource allocation for blockchain-enabled internet of things: a collective reinforcement learning approach. IEEE Internet Things J. 9(22), 23115–23129 (2022)
Waqar, N., Hassan, S.A., Mahmood, A., Dev, K., Do, D.-T., Gidlund, M.: Computation offloading and resource allocation in MEC-enabled integrated aerial-terrestrial vehicular networks: a reinforcement learning approach. IEEE Trans. Intell. Trans. Syst. 23(11), 21478–21491 (2022)
Ganewattha, C., Khan, Z., Latva-Aho, M., Lehtomäki, J.J.: Confidence aware deep learning driven wireless resource allocation in shared spectrum bands. IEEE Access 10, 34945–34959 (2022)
Alwarafy, A., Albaseer, A., Ciftler, B.S., Abdallah, M., Al-Fuqaha, A.: AI-based radio resource allocation in support of the massive heterogeneity of 6G networks. In: 2021 IEEE 4th 5G World Forum (5GWF), pp. 464–469. IEEE (2021)
Kasgari, A.T.Z., Saad, W., Mozaffari, M., Vincent Poor, H.: Experienced deep reinforcement learning with generative adversarial networks (GANs) for model-free ultra-reliable low latency communication. IEEE Trans. Commun. 69(2), 884–899 (2020)
Sanghvi, J., Bhattacharya, P., Tanwar, S., Gupta, R., Kumar, N., Guizani, M.: Res6Edge: an edge-AI enabled resource sharing scheme for C-V2X communications towards 6G. In: 2021 International Wireless Communications and Mobile Computing (IWCMC), pp. 149–154. IEEE (2021)
Xiao, D., Ni, W., Andrew Zhang, J., Liu, R., Chen, S., Qu, Y.: AI-enabled automated and closed-loop optimization algorithms for delay-aware network. In: 2022 IEEE Wireless Communications and Networking Conference (WCNC), pp. 806–811. IEEE (2022)
Kamble, P., Shaikh, A.N.: Optimization of base station for 6G wireless networks for efficient resource allocation using deep learning (April 9, 2022)
Appendix
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Kamble, P., Shaikh, A.N. (2022). Evolution Towards 6G Wireless Networks: A Resource Allocation Perspective with Deep Learning Approach - A Review. In: Rajagopal, S., Faruki, P., Popat, K. (eds) Advancements in Smart Computing and Information Security. ASCIS 2022. Communications in Computer and Information Science, vol 1759. Springer, Cham. https://doi.org/10.1007/978-3-031-23092-9_10
Print ISBN: 978-3-031-23091-2
Online ISBN: 978-3-031-23092-9