
1 Introduction

End-to-end latency, data throughput, energy efficiency, dependability, spectrum utilization, and coverage have improved with each generation from 1G to 5G. The ITU defines three 5G usage scenarios: enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable low-latency communication (URLLC) [1,2,3]. However, 5G will not match the demands of the technologies expected by 2030. 6G wireless networks will deliver worldwide coverage, intelligence, security, and increased spectrum, energy, and cost efficiency [7, 8]. 5G may have trouble supporting large-scale heterogeneous devices, and most 5G networks rely on offline calculations stored on a server, whereas 6G can acquire resources in real time during job execution to improve network performance [13]. Storage and computational resources can be placed at the mobile edge for delay-sensitive and battery-limited devices. Online CNN-based algorithms have been suggested for uncertain situations [17]. Deep reinforcement learning (DRL) handles both continuous and discrete action spaces, and new actor-critic models have been created for discrete offloading choices [23]. A DQN model [29] optimizes resource allocation and offline offloading (Table 1; Fig. 1).

Table 1. Abbreviations of key components of 5G and 6G
Fig. 1. Key components of 5G and 6G

Organization of Paper

This paper discusses the vision and technical objectives of 6G wireless networks. The authors emphasize how various resources, such as computing, communication, networking, storage, and bandwidth, can be allocated optimally and dynamically to requesting users/devices using various deep learning algorithms. Measuring parameters that define system performance, such as throughput, efficiency, system capacity, and decision delay, are also discussed. The Appendix summarizes the algorithms along with the resources used, the types of devices involved, and cost and complexity analysis. Future research directions are also listed.

1.1 6G Vision

6G will provide terrestrial as well as non-terrestrial communication. 6G wireless networks will cover frequency bands such as sub-6 GHz, terahertz, and the optical spectrum, which will support higher data rates as well as dense deployments of devices [1]. 6G wireless networks are intended to be tremendously diverse and dynamic, with high quality of service (QoS) and a complex architecture. Artificial intelligence, specifically machine learning and deep learning, is emerging as a fundamental solution for realizing fully intelligent network management and organization [4,5,6].

6G will be a powerful force behind forthcoming IoT-enabled applications and will overcome the constraints of 5G; as the next generation, 6G will build on 5G. 6G is expected to transform human life and society. Figure 2 shows the vision of 6G wireless networks and the technical objectives for 6G [1, 15, 18, 26].

1.2 Technical Objectives of 6G

Detailed technical objectives are listed below (Fig. 2; Table 2).

Fig. 2. Vision of 6G wireless networks and technical objectives for 6G [1, 18]

Table 2. Technical objectives of 6G [18, 27]

2 Resource Allocation for 6G Wireless Networks

As applications diversify, dynamic real-time resource allocation is needed. A variety of algorithms and methodologies have been devised and assessed, based on the need for networking and computational resources and on existing research gaps, to provide a backbone for resource allocation challenges in 6G wireless networks [16].

3C resources include physical resources (computing, wireless access, storage) and logical resources (subsets of physical resources) [12]. Dynamic tasks require real-time resource allocation.

Using 6G wireless network features such as low latency and high speed, fair resource allocation can enhance network performance by assigning resources dynamically in real time with deep learning algorithms, and such dynamic resource assignment will improve utilization. AI is therefore regarded as a cornerstone of 6G and beyond [7,8,9, 12]. Common AI techniques that will be used for resource allocation in 6G wireless networks are listed below [2, 10, 11] (Table 3).
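As a rough illustration of how a learned policy could drive such dynamic allocation, the minimal sketch below maps an observed network state to a discrete resource-allocation decision. It is not taken from any of the surveyed papers; the state features, layer sizes, and action set are illustrative assumptions.

```python
# Minimal sketch (illustrative only): a small neural policy that maps an
# observed network state to one of several candidate resource allocations.
import torch
import torch.nn as nn

class AllocationPolicy(nn.Module):
    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_actions),           # one score per candidate allocation
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.net(state), dim=-1)   # probability per allocation choice

# Example: 8 assumed state features (queue lengths, channel quality, battery level, ...)
# and 4 candidate actions (e.g. which edge server or channel serves the task).
policy = AllocationPolicy(state_dim=8, num_actions=4)
state = torch.rand(1, 8)
action = torch.argmax(policy(state), dim=-1)            # greedy allocation decision
```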

Table 3. Summary of AI techniques used for resource allocation in 6G wireless networks

3 Summary of Deep Learning Algorithms Used for 6G Wireless Networks Resource Allocation

Yang, Helin, et al. [11] demonstrate AI-powered 6G network location and management, considering handover, spectrum, mobility, and edge computing. Offline training, residual networks, and feature matching on graphics processors were also investigated, with CPU and storage as the resources of interest, and parallel processing is planned. Lin, Mengting, and Youping Zhao [12] address AI-based resource management strategies, discussing radio resources as well as computing and caching resources. They review 6G wireless network issues and prospects, and study deep Q-learning, deep double Q-learning, and their variants for resource management.
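Since [12] surveys both deep Q-learning and deep double Q-learning for resource management, the hedged sketch below contrasts the two target computations; the toy network shapes, batch size, and discount factor are assumptions for illustration, not taken from [12].

```python
# Illustrative contrast of the DQN and double-DQN targets discussed in [12].
import torch
import torch.nn as nn

gamma = 0.99
q_net = nn.Linear(8, 4)        # online Q-network (toy: 8 state features, 4 actions)
target_net = nn.Linear(8, 4)   # periodically synchronised target network

next_state = torch.rand(32, 8)
reward = torch.rand(32)

# Standard DQN target: max over the target network's own estimates.
dqn_target = reward + gamma * target_net(next_state).max(dim=1).values

# Double DQN target: the online network selects the action, the target network
# evaluates it, which reduces the over-estimation bias of plain DQN.
best_action = q_net(next_state).argmax(dim=1, keepdim=True)
ddqn_target = reward + gamma * target_net(next_state).gather(1, best_action).squeeze(1)
```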

Lin, Kai, et al. [17] proposed a resource allocation algorithm for 6G-enabled massive IoT and examined the impact of task changes. The authors employed a backtracking dynamic nested neural network, producing a stable system with faster decision making. Hu, Shisheng, et al. [19] suggested deep reinforcement learning with blockchain for dynamic resource sharing; the authors reduced blockchain overheads and simplified AI data gathering. Further studies are needed to reduce computational complexity when private and public blockchains are used.

Mukherjee, Amrit, et al. [20] proposed an algorithm based on a convolutional neural network (CNN) with back propagation and analysed the allocation of resources to discrete nodes in a cluster. Simulations show reduced resource wastage from redundant data and improved overall network efficiency. Networking and computational resources are used.

Guan, Wanqing, et al. [21] derived a deep reinforcement learning (DRL) technique for service-oriented resource allocation employing several logical networks in a network infrastructure that offers AI-customized slicing. The authors employed computational resources to evaluate service quality for resource allocation. Fast end-to-end (E2E) slicing based on real-time prediction of user demand was stated as a future goal.

Kibria, Mirza Golam, et al. [22] discussed efficient operation, optimization, and control using AI and ML. The authors note that a system can be made smart, systematic, and intelligent by using big data analytics; the resources mentioned were networking resources. They also discussed the advantages and difficulties of big data analytics, pointing out that processing, managing, and leveraging massive amounts of data is difficult and complex.

Liu, Kai-Hsiang, and Wanjiun Liao [23] used DRL to deal with time-varying user requests, focusing on energy consumption and delay in a multi-user system, which ensured good service for task uploads. They jointly optimized computational and radio resources.

Liu et al. [24] allocated computing, storage, and network resources using a constrained MDP and reinforcement learning. The authors introduced instantaneous and cumulative network slicing constraints into the reinforcement learning formulation and report that their strategy reduces constraint violation costs.
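One common way to fold such instantaneous and cumulative constraints into an RL agent is a Lagrangian relaxation; the sketch below illustrates only that general idea, and the constraint budget, step size, and function names are assumptions rather than the exact formulation of [24].

```python
# Hedged sketch of a Lagrangian treatment of a constrained MDP:
# the constraint cost is priced into the reward through a multiplier.
def lagrangian_reward(raw_reward: float, constraint_cost: float, lam: float) -> float:
    # The agent maximizes the raw reward minus the priced constraint violation.
    return raw_reward - lam * constraint_cost

def update_multiplier(lam: float, cumulative_cost: float,
                      budget: float, step: float = 0.01) -> float:
    # Dual ascent: raise the price whenever the cumulative cost exceeds its budget.
    return max(0.0, lam + step * (cumulative_cost - budget))
```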

Sami, Hani, et al. [25] introduced deep reinforcement learning (DRL) with a Markov decision process (MDP) for allocating computing resources under dynamically changing service demands. Their IScaler framework is a novel IoE resource scaling and service placement solution that uses DRL and MDP to estimate resource placement and perform intelligent scaling. Google Cluster trace datasets provide the simulation workload.

Bhattacharya, Pronaya, et al. [29] proposed dynamic resource allocation to solve spectrum allocation difficulties. Blockchain is used to model 6G DQN-based dynamic spectrum allocation, and Q-learning and DQN algorithms are used to simulate system performance. The blockchain-based scheme improves spectrum allocation fairness by 13.57% compared to non-DQN solutions.
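For readers unfamiliar with the underlying learning rule, the minimal tabular Q-learning sketch below shows the kind of update that a DQN generalizes with a neural network for spectrum access; the state encoding, channel count, and hyperparameters are assumptions, not taken from [29].

```python
# Minimal tabular Q-learning sketch for channel (spectrum) selection.
import numpy as np

num_states, num_channels = 16, 4          # e.g. quantized interference levels x channels
Q = np.zeros((num_states, num_channels))
alpha, gamma, eps = 0.1, 0.9, 0.1

def choose_channel(state: int) -> int:
    if np.random.rand() < eps:                      # explore a random channel
        return np.random.randint(num_channels)
    return int(np.argmax(Q[state]))                 # exploit the best known channel

def q_update(state: int, channel: int, reward: float, next_state: int) -> None:
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, channel] += alpha * (td_target - Q[state, channel])
```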

Li, Meng, et al. [30] propose using blockchain technology to overcome restricted computing resources and non-intelligent resource management. The authors suggest a novel reinforcement learning scheme to improve resource allocation and reduce waste, formulating cloud-edge collaborative resource allocation as a Markov decision process.

Waqar, Noor, et al. [31] considered both network access and infrastructure. The authors present a time-varying dynamic system model for high-altitude platforms (HAPs) equipped with MEC servers. Decentralized reinforcement learning-based value iteration reduces computation and communication overhead. Vehicles acting as intelligent agents are assessed using Q-learning, deep Q-learning, and double deep Q-learning in terms of competency, complexity, cost, and size.

Ganewattha, Chanaka, et al. [32] used deep learning to allocate wireless resources in shared spectrum bands with reliable channel forecasting. Encoder-decoder-based Bayesian models capture wireless channel uncertainty, with channel usage and synthetic data provided by the University of Oulu. The resource allocation technique reaches a Nash equilibrium with 2N access points, and the channel allocation process converges quickly, enhancing network sum rates.
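The sketch below illustrates the general encoder-decoder forecasting idea on channel-usage sequences; Monte-Carlo dropout stands in for the Bayesian treatment in [32], and all layer sizes, horizons, and data shapes are assumptions.

```python
# Sketch of an encoder-decoder forecaster for channel-usage sequences.
import torch
import torch.nn as nn

class ChannelForecaster(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.encoder = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.decoder = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.dropout = nn.Dropout(0.2)     # kept active at inference for MC-dropout
        self.head = nn.Linear(hidden, 1)

    def forward(self, past: torch.Tensor, horizon: int) -> torch.Tensor:
        _, h = self.encoder(past)                     # summarize the observed usage
        step = past[:, -1:, :]                        # seed the decoder with the last sample
        outputs = []
        for _ in range(horizon):
            out, h = self.decoder(step, h)
            step = self.head(self.dropout(out))       # predict the next usage sample
            outputs.append(step)
        return torch.cat(outputs, dim=1)

model = ChannelForecaster()
history = torch.rand(8, 24, 1)             # 8 links, 24 past occupancy samples (assumed)
forecast = model(history, horizon=6)       # 6-step-ahead usage forecast
```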

Alwarafy et al. [33] discussed the scalability and heterogeneity problems of 6G networks. A deep reinforcement learning technique is used to solve the resource allocation problem, and the suggested solution addresses dynamic power allocation and multi-RAT assignment in 6G HetNets.

Gong, Yongkang, et al. [9] presented deep reinforcement learning for industrial IoT systems to allocate resources and schedule tasks, emphasizing energy use and delay. Offloading is distinguished by a new isotone action generating technique and an adaptive action updating strategy, and convex optimization solves the time-varying resource allocation problems. Gain rate, batch size, RHC intervals, and training steps are used to measure system performance.

Kasgari, Ali Taleb Zadeh, et al. [34] presented model-free resource allocation for the 6G downlink wireless network with ultra-reliable low-latency communication (URLLC), achieving end-to-end high reliability and low latency under specified data constraints. A GAN-based model enables deep reinforcement learning to capture network conditions and run the system reliably. The proposed resource allocation model leverages multi-user OFDM (OFDMA); rate constraints and latency requirements are fed to the deep reinforcement learning network to minimize power while maintaining reliability. Simulations show that the proposed model reduces transition training time.
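As a rough illustration of how power minimization can be traded against such rate and latency requirements in a DRL reward, consider the sketch below; the penalty weights and thresholds are assumptions and not the exact formulation of [34].

```python
# Hedged sketch of a reward that rewards low transmit power while penalising
# violations of rate and latency constraints (all weights/thresholds assumed).
def urllc_reward(tx_power_w: float, achieved_rate_bps: float, latency_ms: float,
                 min_rate_bps: float = 1e6, max_latency_ms: float = 1.0,
                 rate_weight: float = 5.0, latency_weight: float = 5.0) -> float:
    reward = -tx_power_w                                  # lower power -> higher reward
    if achieved_rate_bps < min_rate_bps:                  # penalise rate violations
        reward -= rate_weight * (min_rate_bps - achieved_rate_bps) / min_rate_bps
    if latency_ms > max_latency_ms:                       # penalise latency violations
        reward -= latency_weight * (latency_ms - max_latency_ms) / max_latency_ms
    return reward
```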

Sanghvi, Jainam, et al. [35] proposed an edge-intelligent model that uses micro base station (MBS) units for resource allocation; the MBS guarantees increased channel gain and decreased energy loss. The deep reinforcement learning-enabled edge AI scheme supports responsive edge caching and better learning. The proposed scheme is compared with 5G in terms of throughput and latency.

Xiao, Da, et al. [36] proposed a deep Q-network (DQN) to maximize the acceptance ratio while placing high-priority uRLLC requests first, characterizing slicing as an MDP. A reward function based on service prioritization is defined; the MDP formulation keeps action selection simple for the DQN, and once trained the DQN approximates the ideal solution.
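A service-prioritized reward of the kind described in [36] could look like the minimal sketch below; the priority weights, service labels, and rejection penalty are assumptions for illustration only.

```python
# Minimal sketch of a service-prioritized reward for slice-request admission.
PRIORITY = {"uRLLC": 3.0, "eMBB": 2.0, "mMTC": 1.0}

def slicing_reward(service_type: str, accepted: bool, reject_penalty: float = 1.0) -> float:
    weight = PRIORITY[service_type]
    # Accepting a high-priority (uRLLC) request earns the most reward;
    # rejections are penalised in proportion to the service priority.
    return weight if accepted else -reject_penalty * weight
```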

In previous work [37], the authors proposed a deep learning network to optimize resource allocation to base stations in a 6G communications network. The 6G network was simulated using standard 6G parameter values. A neural network called Base Station Optimization Network (BSOnet) was created in MATLAB and trained on a dataset with varying parameter values. When deployed in the simulated 6G network, it consumed less power. This network is a step toward optimizing the developing 6G networks, and the authors hope this study will provide the scientific community with a path to further research in this area. Table 4 summarizes the deep learning algorithms used for resource allocation in 6G wireless networks (Fig. 3).
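BSOnet itself was implemented in MATLAB [37]; purely as an illustration of the supervised setup it describes (network-parameter features in, per-base-station power settings out), a rough Python analogue is sketched below, with all feature/target dimensions and training data assumed.

```python
# Rough Python analogue of a BSOnet-style supervised setup (illustrative only).
import torch
import torch.nn as nn

features = torch.rand(256, 10)             # e.g. load, users, channel quality per sample
power_targets = torch.rand(256, 5)         # desired power setting for 5 base stations

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 5))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):                   # simple supervised training loop
    optimizer.zero_grad()
    loss = loss_fn(model(features), power_targets)
    loss.backward()
    optimizer.step()
```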

Fig. 3. Deep learning techniques used in various papers from the survey

4 Conclusion and Future Scope

This study discusses the vision, trends, and convergence of artificial intelligence for 6G wireless networks. The authors have surveyed numerous AI algorithms that will be employed for resource allocation, summarizing several deep learning-based algorithms for resource optimization in support of varied services and emphasizing research directions and possible solutions for 6G wireless networks. Open topics include choosing between deep learning algorithms for different application scenarios and designing techniques to lower computing costs. The powerful learning and reasoning abilities of deep learning algorithms can improve network performance, and deep learning techniques continue to be designed for better accuracy and computing efficiency. 6G networks will require effective resource allocation to provide diversified services and massive connectivity. Co-design of hardware and deep learning algorithms may also be possible.