1 Introduction

Ubiquitous computing has revolutionized how we gather information about the environment by combining small sensors, low-power computing devices, and communication tools. This technology has found extensive applications in sports and athletics, particularly in measuring players' performance and physical activity. For instance, motion analysis plays a significant role in running-based games such as football, hockey, and rugby, enabling the study of athletes' movements under real-life conditions. Wearable devices provide valuable insights into athletes' health and well-being, while strategic analysis before matches and tactical analysis after them have become commonplace in sports such as football, hockey, and cricket. Over the past three decades, the use of wired and wireless microcomputers for data transfer and information sharing has grown continuously, leading to smaller, more affordable computing systems accessible to everyone [1].

Looking ahead, it is evident that the future of data and information lies in the networked world as computing devices become more affordable and widely available. This shift has led to the emergence of the Internet of Things (IoT), which focuses on connecting individual objects or things anytime, anywhere. However, the IoT faces several challenges, including security and privacy concerns, data sensing and monitoring, storage and processing capabilities, and the need for interoperability and specialized protocols [2]. To address some of these challenges, Edge Computing has come into play, tackling issues such as network latency, data processing, and aggregation. By placing computing power at the edge, closer to the source sensor, Edge Computing enables information to be processed locally. This approach supports both upstream and downstream flows, facilitating data transmission from source to cloud (upstream) and from cloud to source (downstream).

Sports scientists and coaches now recommend using wearable devices at the source level to monitor health and training load. These lightweight devices, often worn on the skin's surface, provide valuable data on variables such as sleep duration, heart rate, movement artefacts, and acceleration, with wristwatches covering most of these variables [3]. Integrating these pillars of technology is crucial to reducing latency, improving data acquisition, and streamlining data processing and cleansing, ultimately leading to more effective models for sports and athlete management.

Swimming, a speed-based exercise with unique demands on physical capacity and technical proficiency, presents specific challenges due to its underwater nature and the intricacies of human movement in water. Despite the fundamental simplicity of physical training techniques for swimming, many young swimmers still rely on conventional teaching methods that may be outdated. It is essential to address this discrepancy and explore advanced training concepts and techniques that help swimmers achieve scientific, organized, and effective basic physical training. This challenge requires prompt attention within the swimming community to ensure swimmers' safety and overall development [4].

Based on the above, this paper discusses a framework and approach for collecting data from people (in this case, athletes or players) and processing it at the edge level to clean the data without interruption or latency. The next stage is to send the collected data to cloud computing through networking devices. The cloud receives the dataset from the edge and feeds it to an Artificial Neural Network (ANN) to build the athlete's model (profile). The suggested model predicts athletes' performance using different features and parameters and recommends training, rest, and sleep schedules. A typical framework of the system in a broader view is presented in Fig. 1. The first pillar, measurement, comprises sensing devices that sense the environment and collect data from it. The second pillar, processing, is cloud computing, as depicted in Fig. 1. The last pillar of the system is analysis. In this paper, we work on the processing and analysis pillars. The model brings automation to profiling using the ANN model, while data collection, dataset creation, and data cleaning take place at the edge level.

This paper is structured as follows: Sect. 2 presents a thorough background study of swimmer movement strategies, stressing the limits of traditional coaching methods and investigating the potential benefits of cloud computing and AI. Section 3 describes our study approach, which includes data collection via wearable devices and sensors and the use of cloud computing and AI tools for analysis. Section 4 presents the findings and discussion based on the data analysis, highlighting the insights obtained and the limitations identified. Finally, Sect. 5 summarises the major findings and suggests directions for future study, such as the use of virtual reality and augmented reality in sports instruction.

2 Background Study

Traditionally, athlete testing was conducted in a laboratory under controlled, laboratory-specified conditions. Smart mobile devices and network gateways now assess the player or athlete and track their health status, which may include chronic disease and physical conditions; health databases play a vital role in such systems. The machines used in training were confined to the laboratory and could not be used in the athlete's real ecological conditions. Because these machines measured the player in a quasi-static position, the sensor sampling results were accurate; however, they did not reflect the athlete's real behaviour on the playing field. This trade-off between accuracy and playing-field conditions has remained an open question for a long time [5]. Small computing, electronic, or wireless communication devices that can be incorporated into human bodies, gadgets, or clothes are known as wearables, wearable devices, or wearable technologies. These devices can be worn on the body or attached to other gadgets that connect with it; some, such as microchips, are invasive. Wearable devices differ from smartphones and tablets in that they capture athletes' biometric and physiological data through scanning and various sensors. Since the devices are small and portable, they can easily be attached to the body; they are therefore hands-free and consume little power [6].

The trend of wearable devices has seen steady growth over the years, with increasing annual orders from 2016 to 2021-22. However, there was a notable surge in sales from 2019 to 2022, as indicated by Fig. 3. Several factors contributed to this increase, but the global Covid-19 pandemic was one significant driver. The pandemic disrupted daily life and spurred a greater focus on health and wellness. As people became more conscious of their well-being, the demand for wearable devices to monitor and track various health parameters soared. From fitness trackers to smartwatches with built-in health sensors, these devices offered individuals a way to stay connected, monitor their activity levels, and track vital signs. As a result, the term "Internet of Wearable Things" (IoWT) emerged, emphasising the incorporation of wearable technology into the linked web of devices and information sharing.

The impact of Covid-19 was felt worldwide, leading to the rapid development and deployment of chips and wearable devices on a global scale. These devices were crucial in monitoring individuals’ health, detecting symptoms, and facilitating contact tracing efforts. Wearable devices equipped with temperature sensors, heart rate monitors, and oxygen saturation measurement capabilities became essential tools in the fight against the pandemic. The increased adoption of such devices during this period contributed significantly to the growth observed in the wearable device market [7]. Research and surveys have identified eight distinct categories of products by examining the growth of wearable devices over the past decade. These include sports and fitness trackers, health monitoring devices, smartwatches, smart clothing, virtual and augmented reality headsets, wearables, smart jewellery, and implantable devices. Among these categories, sports, fitness, health, and lifestyle-related products, such as smartwatches, have consistently garnered the most attention and consumer interest [8]. These devices offer features like step counting, heart rate monitoring, sleep tracking, and stress management, appealing to individuals who prioritize their well-being and seek to lead active lifestyles. Integrating these wearable devices with smartphone applications and online platforms further enhances the user experience, allowing for personalized insights, goal tracking, and data analysis.

Edge computing has emerged as a critical technology in data processing, enabling immediate and localized data generation and analysis. As described in research and surveys [9], edge computing offers a multidimensional approach to data processing and storage, bridging the physical world and the digital realm. Figure 1 showcases the diverse applications of edge computing, including its integration with Blockchain, the Internet of Things (IoT), cloud computing, and Artificial Intelligence (AI). By leveraging edge computing, wearable devices can offload some processing tasks from the cloud and perform them at the network's edge. This enables real-time analysis and reduces latency, making it particularly advantageous for time-sensitive applications such as health monitoring, activity tracking, and sports performance analysis. Combining edge computing with other technologies like Blockchain ensures data security and integrity, while integration with the IoT enables seamless connectivity and data exchange between wearable devices and other smart devices in the environment. Additionally, cloud computing and AI play vital roles in data storage, advanced analytics, and extracting valuable insights from the vast data wearable devices collect. The interconnectedness of wearable devices, the impact of the Covid-19 pandemic, and the utilization of edge computing collectively drive innovation in the field of wearable technology. As the demand for wearable devices continues to grow, fuelled by increasing awareness of personal health and well-being, advancements in edge computing and related technologies will further enhance these capabilities.

Fig. 1
figure 1

Visual view of Edge computation and applications [10]

This research targets real-time data processing, data management, data quality, and communication at the edge; clean and precise data is then sent to the cloud for further intelligent decision-making.

The study in [11] focuses on the accuracy and precision of wearable devices used for the real-time monitoring of swimming athletes. The authors explore different wearable devices used on various parts of a swimmer's body, including suits, caps, and the body itself. One example of a wearable device discussed in the study is an optical sensor placed on the swimmer's cap. The sensor measures various parameters related to swimming, such as stroke count and swimming speed. Smartwatches, along with optical sensors, are utilized to determine the swimmer's heartbeat. According to the findings, sensors in direct contact with the skin are most effective at reducing artefacts and delivering more accurate readings; chest-worn sensors are also used for this purpose. One of the major issues of underwater ECG recording is the lack of skin contact in water. As a result, the study emphasizes the significance of collecting data in the same setting in which the swimmer operates. This requires researchers to create sensors that can reliably collect data while submerged in water. The study also highlights the need to minimize interference from other noise sources that can affect the accuracy of the recorded data.

With its distributed nature, cloud computing offers a robust solution for the data-intensive and computationally demanding requirements of Artificial Intelligence (AI). Cloud computing has become an integral component of AI, playing a vital role in storage scalability and powerful computation. The machine learning process, which involves extracting knowledge from vast databases, is often considered the foundation of AI. While machine learning provides valuable insights by analysing data and presenting solutions, deep learning has emerged as a more advanced approach, allowing for more complex data analysis and knowledge extraction [12]. The integration of AI into the cloud brings both advantages and disadvantages. Cloud services offer access to an immense amount of data, although not all of it may be relevant to a given AI model. However, as neural networks expand over time and models learn from one another to overcome limitations, knowledge transfer becomes seamless. This continuous learning process fosters a symbiosis between AI and cloud computing, enhancing performance and efficiency [13].

Swimming can benefit significantly from harnessing the power of cloud computing and AI. One of the key areas where this integration can have a transformative impact is in enhancing swimmer movement techniques. By leveraging cloud-based resources, a wealth of data collected from various sources, including wearable devices and underwater sensors, can be stored and processed efficiently. This data can then be fed into AI algorithms and neural networks to analyse and extract valuable insights regarding swimmers' movements, stroke techniques, and overall performance. The cloud's storage scalability allows vast amounts of data to accumulate over time, enabling AI models to learn and improve continuously. Individual algorithm constraints can be overcome by sharing the learning of numerous designs, resulting in more accurate and complete evaluations of swimmers' movement patterns [14]. Furthermore, the cloud's processing capability enables AI algorithms to evaluate this data quickly, allowing swimmers and coaches to receive real-time feedback and personalized recommendations. The combination of cloud computing and artificial intelligence improves knowledge of swimmer movement strategies and opens the door to personalized training programs and efficiency optimization. By utilizing AI-powered analytics, coaches can identify areas of improvement for each swimmer and tailor training plans accordingly. The continuous learning process enabled by the cloud ensures that the AI models evolve alongside the swimmers, adapting to their specific needs and evolving techniques.

3 Research Methodology

3.1 Framework

Implementing the basic framework begins with wearable devices, specifically incorporating a gyroscope, a muscle movement sensor, and a heartbeat sensor. Data sampling is conducted at the edge level to ensure accurate results and reduce false readings [15]. This approach minimizes the chances of noise interference, ensuring that the data received from the wearable devices is authentic and reliable [16]. The edge device acts as a receiver, collecting data from the wearable devices and transmitting it in bulk to the cloud computing infrastructure. The cloud then processes and stores the dataset, subjecting it to analysis using a profile model based on health and motion databases. The overall framework is depicted in Fig. 2. By employing this framework, the automation of swimmers' profiles is achieved; motion technique, heartbeat, and muscle movements are all integrated into the swimmer's profile [17]. The figure showing the AI implementation demonstrates the use of cloud computation on the dataset mentioned above. The gyroscope plays a pivotal role in calculating the athlete's motion and angle. Leveraging Artificial Intelligence, the motions are classified into five distinct categories: block, diving, underwater, swimming, and turning [18]. To achieve this classification, the unsupervised K-Means algorithm is employed.

In the context of swimming pools, edge computing assumes a vital role. Athletes remain seamlessly connected to the supporting computers while in the water. The wearable devices maintain constant skin contact with the swimmer, ensuring the captured data is recorded directly in the storage devices connected to the edge computer. This approach preserves the integrity of the signals, minimizing noise and artefacts. Furthermore, edge computing operates with low power consumption and high computational capability. By providing a clean and reliable dataset to the cloud, edge computing contributes to efficient data transmission, saving valuable time and resources.

3.2 Flow of Data

The flow diagram illustrates the comprehensive system divided into three stages: Data Capturing, Data Pre-Processing, and the AI Algorithm, as depicted in Fig. 2. To obtain the necessary information, the swimmer's body is equipped with several sensors, including gyroscopes, pulse sensors, and muscle contraction sensors. These wearable devices are securely fastened to the swimmer's body to obtain precise and trustworthy data. The gyroscope sensor crucially determines the swimmer's velocity and course through the water and provides important movement-related data, capturing the swimmer's rotational and vertical motions accurately and allowing a complete evaluation of their style and efficacy. During workouts or swimming sessions, the heartbeat sensor continuously monitors the swimmer's heart rate; this information is required to assess the swimmer's heartbeat and effort levels. By continuously tracking swimmers' heartbeats, instructors and trainers may adjust training programs to maximize effectiveness and avoid excessive effort [19].

The muscle motion sensor also measures and records the contractions the swimmer's muscles make during the various strokes and movements. This data gives vital insight into the swimmer's muscular involvement and synchronization, helping identify areas for improving technique and overall effectiveness in the water. When these sensors are coupled, they produce a large and dynamic dataset that records many aspects of the swimmer's performance. This data serves as the foundation for the subsequent phases of the system, including Data Pre-Processing and the AI Process.

Fig. 2
figure 2

Basic Framework

Figure 3 illustrates the heart rate, muscle movement, and organ movement technique of a swimmer recorded at various exercise levels. The purpose of this full tracking system is to capture vital information that provides important insights into the swimmer's biological reactions and technical competence. At the beginning of the swim, the swimmer's pulse is recorded to analyse the cardiovascular response during the workout. By analysing the swimmer's level of effort and endurance, trainers and coaches may customize training programs and optimize performance. Simultaneously, the device captures the swimmer's muscle movement, meticulously examining the movement sequences and the synchronization of the muscles involved in different strokes. With this knowledge, the swimmer's technique may be thoroughly examined to identify any shortcomings or possible improvements. Furthermore, the system records the swimmer's organ motion, with a focus on internal alignments and synchronization throughout the swimming movements. This component provides information on the swimmer's body posture and hydrodynamics, which can be used to reduce drag and boost the swimmer's speed and effectiveness in the water. The combination of these numerous data points provides a comprehensive picture of the swimmer's physical and technical features at each level of performance. This detailed recording method enables coaches and trainers to study and optimize the swimmer's training techniques, stroke mechanics, and overall performance to achieve the best possible results.

Fig. 3
figure 3

The positions and angles at which the swimmer moves [20]

3.3 Edge Level Computation

After the initial data collection, the procedure continues with edge-level data processing and cleaning. Edge computing is divided into two discrete stages, each serving a specific purpose. In the first stage, the gathered signals are appropriately sampled to avoid false motions or incorrect pulses. This sampling process decreases the likelihood of mistakes or wrong readings, ensuring that the data accurately portrays the swimmer's motions and heartbeats. In the second stage of edge-level processing, the data is categorized and processed by the K-Means method. This algorithm separates the data into distinct groups, allowing for more organized and appropriate analysis. By assigning appropriate labels or remarks to the data, trends and patterns may be discovered, facilitating the analysis of the swimmer's efficiency. To optimize the sampling process, the sampling rate is chosen according to the maximum acceptable error. This error, denoted ε(t), represents the difference between the sensor's received signal and the algorithm's final output at a specific time t. By carefully controlling the sampling rate and minimizing the error, the data can be accurately captured and processed, ensuring reliable and high-quality results. Additionally, it is important to note that as the sampling rate (Ts) is increased while keeping the input constant, the continuity of time is maintained. This temporal continuity is illustrated in Fig. 4, highlighting the relationship between the sampling rate and the seamless representation of time in the data analysis process. The error can be calculated using Eq. 1.

$$\varepsilon \left(t\right)=\left|y-K\sin\left(\omega t\right)\right|$$
(1)

In the above equation, ε(t) represents the error at a specific time t, y is the algorithm's output value, and K sin(ωt) models the sampled sinusoidal input signal.
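For illustration, a minimal Python sketch of Eq. 1 is given below, assuming a zero-order-hold reconstruction of the sampled sinusoid; the amplitude, frequency, and sampling rate are illustrative values, not measurements from our experiments.

import numpy as np

def sampling_error(K, omega, fs, duration=1.0):
    # Dense time axis standing in for continuous time
    t = np.linspace(0.0, duration, 10_000)
    true_signal = K * np.sin(omega * t)

    # Sampling instants and sampled values
    ts = np.arange(0.0, duration, 1.0 / fs)
    samples = K * np.sin(omega * ts)

    # Zero-order hold: keep each sample until the next one arrives
    idx = np.clip(np.searchsorted(ts, t, side="right") - 1, 0, len(ts) - 1)
    y = samples[idx]

    return np.abs(y - true_signal)   # Eq. 1: epsilon(t) = |y - K sin(wt)|

eps = sampling_error(K=1.0, omega=2 * np.pi * 5, fs=50)
print(f"maximum sampling error: {eps.max():.4f}")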

Fig. 4
figure 4

Example of error that results from sampling an analog signal [21]

In signal processing, the Nyquist criterion plays a significant role in determining the appropriate sample rate for accurate signal reconstruction. According to this criterion, the sample rate must be at least twice the highest frequency component present in the input signal. By adhering to the Nyquist criterion, we can ensure that the reconstructed signal faithfully represents the original input. However, certain devices or systems may not be able to sample signals at the rate the Nyquist criterion demands. In such cases, edge computation proves to be a valuable alternative. Edge computation refers to performing computational tasks directly on edge devices or sensors closer to the source of the data. By leveraging edge computation, we can overcome the limitations of devices that cannot sample at the Nyquist rate: these edge devices have the computational power to perform signal-processing tasks locally, allowing real-time analysis and processing of the input signal and ensuring accurate and timely results without relying on traditional methods that require a higher sample rate. Thus, edge computation serves as a practical solution for devices that cannot meet the requirements of the Nyquist criterion. By bringing the computation closer to the data source, we can achieve efficient and reliable signal processing even in scenarios where the Nyquist criterion cannot be met.
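As a minimal sketch of this criterion (the 40 Hz figure below is illustrative, not a value taken from our sensors):

def min_sampling_rate(f_max_hz):
    # Nyquist criterion: sample at no less than twice the highest
    # frequency component present in the signal
    return 2.0 * f_max_hz

# e.g. a signal whose highest component is 40 Hz
print(min_sampling_rate(40.0))   # -> 80.0 samples per second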

When applying the Nyquist criterion in edge computing, the highest frequency component of the system determines the minimum sampling rate, denoted ƒmin. Defining this highest component as F, the corresponding angular frequency is \(\omega =2\pi F\). The worst case can be modelled as calculated in Eq. 2.

$$g\left(t\right)={2}^{n-1}\sin\left(\omega t\right)$$
(2)

In the above equation, n is the number of bits processed by the edge computer. The maximum rate of change of g(t), denoted by G, occurs when the derivative of g(t) is maximized. The value of G can therefore be calculated as in Eq. 3.

$$G={\left.\frac{d}{dt}g\left(t\right)\right|}_{max}=\omega {2}^{n-1}{\left.\cos\left(\omega t\right)\right|}_{\omega t=0}=\omega {2}^{n-1}$$
(3)
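As a worked example, the short sketch below evaluates Eq. 3 numerically; the bit width and highest frequency component are illustrative assumptions rather than parameters of our edge hardware.

import numpy as np

def worst_case_rate_of_change(n_bits, f_max_hz):
    omega = 2 * np.pi * f_max_hz          # omega = 2*pi*F
    # Eq. 3: the slope of g(t) = 2^(n-1) sin(wt) peaks where cos(wt) = 1
    return omega * 2 ** (n_bits - 1)

# e.g. a 10-bit edge ADC tracking a 40 Hz component
print(f"G = {worst_case_rate_of_change(10, 40.0):.1f} counts per second")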

The other challenge is denoising the heartbeat signal, because many interferences corrupt the signal during swimming.

A windowed analysis is used to eliminate noise from the heartbeat signal. Once the dominant QRS scale and the position of each R-wave are known, a window can be defined around each detected R-wave. The window length is an integer function of the dominant QRS scale, the QRS complex itself, and the location of every detected R-wave, as calculated in Eq. 4.

$${d}_{i}\left(x\right)=1+\left(\gamma \cdot {\gamma }_{QRS}-1\right)\times \left[1-{e}^{-100\beta \frac{{\left({P}_{i}-x\right)}^{2}}{f\cdot {\gamma }_{QRS}}}\right]$$
(4)

In the above equation, \(\gamma\) and β are constant parameters, \({P}_{i}\) is the \({i}^{th}\) detected R-wave, \(f\) represents the sampling frequency, and \(x\) is the sample index, which varies according to Eq. 5.

$$\frac{{P}_{i-1}+{P}_{i}}{2}\le x\le \frac{{P}_{i}+{P}_{i+1}}{2}$$
(5)

This formula defines a window whose size varies from 1 at each detected R-wave position to its maximum, \(\gamma \cdot {\gamma }_{QRS}\), at the midpoint between every two detected R-waves.

To remove the noise, the denoising window slides over the noisy signal and the mean is taken around each centre point. The ECG signal behaves as a discrete function y(x), where x is the sample index; the denoised signal \(\bar{y}(x)\) at the \({x}^{th}\) sample index is therefore obtained via Eq. 6.

$$\bar{y}\left(x\right)=\frac{1}{{d}_{i}\left(x\right)}{\sum }_{j=x-\frac{{d}_{i}\left(x\right)}{2}}^{x+\frac{{d}_{i}\left(x\right)}{2}}y\left(j\right)$$
(6)
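A minimal Python sketch of this adaptive-window moving average is given below. It assumes that the R-wave nearest the current sample serves as P_i, and the parameter values (γ, γ_QRS, β, the sampling rate, and the synthetic R-peak positions) are illustrative only.

import numpy as np

def window_length(x, p_i, gamma, gamma_qrs, beta, fs):
    # Eq. 4: window grows from 1 at the R-wave P_i toward
    # gamma * gamma_qrs far away from it
    spread = np.exp(-100 * beta * (p_i - x) ** 2 / (fs * gamma_qrs))
    return int(round(1 + (gamma * gamma_qrs - 1) * (1 - spread)))

def denoise(y, r_peaks, gamma=2.0, gamma_qrs=8.0, beta=0.01, fs=250):
    # Eq. 6: centred moving average with the adaptive window d_i(x)
    out = np.empty(len(y))
    for x in range(len(y)):
        p_i = min(r_peaks, key=lambda p: abs(p - x))   # nearest R-wave
        half = window_length(x, p_i, gamma, gamma_qrs, beta, fs) // 2
        lo, hi = max(0, x - half), min(len(y), x + half + 1)
        out[x] = y[lo:hi].mean()
    return out

# Illustrative use on a synthetic noisy beat train
rng = np.random.default_rng(0)
t = np.arange(1000)
y = np.sin(2 * np.pi * t / 250) + 0.1 * rng.standard_normal(1000)
clean = denoise(y, r_peaks=[62, 312, 562, 812])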

3.4 Pre-Processing Algorithms

Algorithm 1 implements the Nyquist-based pre-processing pipeline on a Raspberry Pi. The process begins with capturing signals from the sensors by connecting an analogue-to-digital converter (ADC) to the Raspberry Pi. After acquiring the signal, the Nyquist frequency is calculated by halving the sampling frequency, which bounds the highest frequency component that can be represented. Once the Nyquist frequency is determined, a digital low-pass filter is applied to the signal to remove noise above this frequency. Finally, signal decimation is performed, reducing the sample rate and thereby the computational load for subsequent processing stages. This algorithm allows for efficient signal processing and noise reduction on the Raspberry Pi.

Algorithm 1: PreProcessing(signals):

// Step 1: Capture Signals

 Connect ADC to Raspberry Pi

 Start capturing signals from sensors

 

// Step 2: Calculate the Nyquist Frequency

 Fs = GetSamplingFrequency() // Obtain the sampling frequency

 Fn = Fs / 2 // Calculate the Nyquist frequency

 

// Step 3: Signal Filtering

 filteredSignal = ApplyLowPassFilter(signals, Fn) // Apply a digital low-pass filter

 

// Step 4: Signal Decimation

 decimationFactor = 2 // Decimation factor of 2 halves the sample rate, reducing downstream load

 decimatedSignal = Decimate(filteredSignal, decimationFactor) // Reduce the number of samples in the signal

 

// Step 5: Process the Decimated Signal

 processedSignal = ProcessSignal(decimatedSignal) // Apply desired processing algorithm

 

// Step 6: Save and Transmit Processed Signal

 SaveToFile(processedSignal) // Save processed signal to a file

 TransmitOverNetwork(processedSignal) // Transmit processed signal over a network

 return processedSignal
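For concreteness, a minimal runnable Python sketch of Algorithm 1 is given below, assuming SciPy is available on the device. Sensor capture, storage, and transmission (Steps 1, 5, and 6) are application-specific and therefore stubbed out, and the filter order and cutoff fraction are illustrative choices.

import numpy as np
from scipy.signal import butter, filtfilt, decimate

def preprocess(signal, fs):
    # Step 2: Nyquist frequency of the incoming stream
    fn = fs / 2.0

    # Step 3: digital low-pass filter (4th-order Butterworth;
    # a cutoff of 0.4 * fn is an illustrative choice)
    b, a = butter(4, 0.4 * fn, btype="low", fs=fs)
    filtered = filtfilt(b, a, signal)

    # Step 4: decimate by 2 to reduce downstream computational load
    decimated = decimate(filtered, 2)

    # Steps 5-6 (further processing, saving, transmission) would follow here
    return decimated

# Illustrative use on one second of a synthetic 5 Hz signal at 200 Hz
fs = 200.0
t = np.arange(0, 1, 1 / fs)
out = preprocess(np.sin(2 * np.pi * 5 * t), fs)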

 

Once the data has been received in real time and sampled by the edge computer, it must be classified and annotated for further processing. The edge-level clustering algorithm is given below:

Algorithm 2: KMeansClustering(X, V):

// Step 1: Initialization

 Initialize cluster centers V randomly

 repeat:

 

// Step 2: Assignment

For each point x in X:

 Calculate the distance between x and each cluster center v in V

 Assign x to the cluster center with the minimum distance

 

// Step 3: Recalculation

For each cluster center v in V:

 Calculate the new centre as the mean of the points assigned to v

 until convergence

 

// Step 4: Return the updated cluster centers

return V

 
Note: The pseudocode provides a general framework for the K-Means clustering algorithm. Additional functions or operations may be needed to calculate distances, assign points to cluster centres, recalculate the centres, and determine convergence criteria, depending on the specific implementation requirements.
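A compact, runnable Python sketch of Algorithm 2, written with NumPy, is given below; the random seed, tolerance, and iteration cap are illustrative defaults rather than values from our implementation.

import numpy as np

def kmeans(X, k, max_iter=100, tol=1e-6):
    # Step 1: initialize cluster centres V from k random points
    rng = np.random.default_rng(0)
    V = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(max_iter):
        # Step 2: assign each point to its nearest centre
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step 3: recompute each centre as the mean of its cluster
        new_V = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                          else V[c] for c in range(k)])
        if np.linalg.norm(new_V - V) < tol:   # convergence check
            break
        V = new_V
    # Step 4: return the updated cluster centres (and the labels)
    return V, labels

# Example: cluster two-dimensional sensor features into five groups
X = np.random.default_rng(1).random((200, 2))
centres, labels = kmeans(X, k=5)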

We applied the K-Means algorithm to our dataset to cluster the data around five centre points V [12]. Since K-Means is an unsupervised learning algorithm, it is suitable for classifying unlabelled data without predefined annotations. We used a Raspberry Pi as the edge computing device in our implementation. The sampled and classified data from the edge device is then transmitted to cloud computing for further intelligent processing. The data transmission from the edge device to the cloud is performed over Wi-Fi in bulk, ensuring efficient data transfer. The transfer timing aligns with one complete cycle of the swimmer in the pool: as the swimmer finishes one cycle, the collected data is sent to the cloud computing infrastructure. In the cloud computing environment, a deep learning algorithm, the Convolutional Neural Network (CNN), is employed to learn and analyse the swimming movements of athletes. CNNs are well suited to processing complex data such as images or time series, and their implementation typically requires high processing power and specialized hardware such as GPUs. By leveraging the scalable computation capabilities of cloud computing, the CNN algorithm can efficiently analyse the transmitted data and extract meaningful insights about the swimmer's movements. The overall workflow and functioning of the CNN in the cloud computing environment are depicted in Fig. 5, highlighting the different layers and operations involved in processing the swimmer's data.

Fig. 5
figure 5

Stages of Convolutional Neural Network

$$G\left[m,n\right]=\left(f*h\right)\left[m,n\right]={\sum }_{j}{\sum }_{k}h\left[j,k\right]f\left[m-j,n-k\right]$$
(7)

The formula above defines the two-dimensional convolution of two matrices h and f. \(G[m,n]\) is the value of the output matrix at location \((m,n)\), i.e. the result of convolving h and f at that point, and \((f*h)[m,n]\) is an equivalent notation for the same convolution. \(h[j,k]\) is the element of matrix h at coordinates \((j,k)\), and \(f[m-j,n-k]\) is the element of matrix f at coordinates \((m-j,n-k)\). To compute \(G[m,n]\), the corresponding elements of h and f are multiplied at each point (m, n) and the products are summed over all valid placements of h within f, as indicated by the double sum over j and k; these indices give the relative placement of the elements of h within f, ranging from 0 up to the dimensions of h. Taken together, this formula represents a fundamental image and signal processing operation used to filter two-dimensional data and extract features from it.
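The Python sketch below evaluates Eq. 7 directly, treating indices outside the matrix bounds as zeros; it is a naive reference implementation for clarity, not the optimized convolution used inside CNN frameworks.

import numpy as np

def conv2d(f, h):
    # Eq. 7: G[m, n] = sum_j sum_k h[j, k] * f[m - j, n - k]
    M, N = f.shape
    J, K = h.shape
    G = np.zeros((M, N))
    for m in range(M):
        for n in range(N):
            for j in range(J):
                for k in range(K):
                    if 0 <= m - j < M and 0 <= n - k < N:  # valid placements only
                        G[m, n] += h[j, k] * f[m - j, n - k]
    return G

f = np.arange(16.0).reshape(4, 4)          # input matrix (e.g. an image patch)
h = np.array([[0.0, 1.0], [1.0, 0.0]])     # small kernel
print(conv2d(f, h))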

Algorithm 3, a hybrid Artificial Neural Network for swimmer movement clustering, is intended to categorize and cluster swimmer motions based on the data obtained. The programme first gathers and prepares the swimmer's movement data. It then trains an unsupervised Artificial Neural Network (ANN), a self-organizing map (SOM), which groups similar motions without requiring labelled data. Representative samples from each cluster are then chosen and used to train a supervised ANN, such as a multilayer perceptron (MLP), which categorizes swimmer motions and assigns them to classes reflecting different swimming styles. A validation dataset is used to evaluate the performance of the supervised ANN. Lastly, the hybrid ANN is used to cluster swimmer motions in real time, combining the information gained from the unsupervised and supervised ANNs to achieve precise and effective categorization.

Algorithm-3: Hybrid Artificial Neural Network for Swimmer Movement Clustering

Input:{Data: Collection of swimmer movement data}

Output: {Clusters: Classification of swimmer movements into distinct clusters}

 

Step 1: TrainUnsupervisedANN(data):

 Initialize unsupervised ANN (SOM)

 Train SOM using the swimmer movement data

 Return the trained unsupervised ANN (SOM)

 

Step 2: SelectRepresentativeSamples(unsupervisedANN, data):

 For each cluster c in unsupervisedANN:

 Initialize an empty set R_c for representative samples of cluster c

 For each data point x in data:

 Find the nearest neuron in unsupervisedANN to x

 If the cluster of the nearest neuron is c:

 Add x to R_c

 Return the set of representative samples R

 

Step 3:TrainSupervisedANN(repSamples, labels):

 Initialize supervised ANN (MLP)

 Train MLP using the representative samples (repSamples) as input and their corresponding cluster labels as output

 Return the trained supervised ANN (MLP)

 

Step 4:EvaluateSupervisedANN(supervisedANN, validationData):

 Use validationData to evaluate the performance of the trained supervised ANN (MLP)

 Calculate and return evaluation metrics (e.g., accuracy, precision, recall, F1 score)

 

Step 5: ApplyHybridANN(newData, hybridANN):

 Let predictedCluster be the output cluster of the hybrid ANN (MLP) when newData is inputted

 Return predictedCluster
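A compact Python sketch of this hybrid pipeline is given below. Since scikit-learn ships no self-organizing map, K-Means stands in for the unsupervised SOM stage here (a plainly labelled substitution; a dedicated SOM library could be dropped in instead), and the layer sizes and split ratio are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def hybrid_cluster(movement_data, n_clusters=5):
    # Step 1: unsupervised stage groups similar motions
    # (K-Means substituted for the SOM of Algorithm 3)
    unsup = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    pseudo_labels = unsup.fit_predict(movement_data)

    # Steps 2-3: train a supervised MLP on the cluster labels
    X_tr, X_val, y_tr, y_val = train_test_split(
        movement_data, pseudo_labels, test_size=0.2, random_state=0)
    mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)
    mlp.fit(X_tr, y_tr)

    # Step 4: evaluate on held-out data
    print("validation accuracy:", accuracy_score(y_val, mlp.predict(X_val)))

    # Step 5: mlp.predict(new_data) clusters unseen motions in real time
    return mlp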

 

4 Result and Discussion

The algorithm consists of three main stages for processing and classifying swimmers' movements. In the first stage, the signal is sampled and de-noised to ensure accurate data: the gyroscope and muscle movement sensors' signals are captured, and procedures are applied to eliminate noise and other artefacts that would impair the quality of the data. Annotating and organizing the data are the main goals of the second stage, which entails assessing the pre-processed signals and adding annotations or labels to the various motions, such as freestyle, breaststroke, and butterfly. Furthermore, any leftover noise or undesired data is filtered away to increase the precision of the subsequent analysis. The last stage applies an AI-based classification system to categorize the swimmers' motions: deep learning algorithms are trained on the labelled and cleansed data, after which the AI program can identify new motions based on the patterns and attributes extracted from the data. By dividing the procedure into these three stages (sampling and de-noising, cleaning and annotation, and AI-based categorization), the algorithm offers a systematic and complete method for precisely analyzing and classifying swimmers' motions.

4.1 Sampling of Data

To guarantee precise data sampling, the Nyquist requirements were applied in edge computing. According to the Nyquist requirements, the sample rate must be no less than twice the highest frequency component in the data. By following these guidelines, the edge computing system acquires an adequate number of samples to properly reflect the original signal. When data was captured at the edge level, the gyroscope performed exceptionally well. Figure 6 depicts a visual comparison of the edge-level samples with the original data. The edge-level sampling technique improved efficiency and precision in capturing the gyroscope readings, resulting in a more faithful representation of the actual motion. By leveraging the power of edge computing and applying the Nyquist criteria, the algorithm ensured that the sampled data provided a comprehensive and reliable representation of the original signal, with the gyroscope data benefiting particularly from this approach.

Fig. 6
figure 6

Classification of all axes of the gyroscope movement

4.2 Applying Sampled Data to Heartbeat Sensor

The application of the Nyquist criteria was also extended to the heartbeat sensor, ensuring accurate sampling of the heart rate data. Figure 7 illustrates the implementation of the Nyquist criteria for sampling the heartbeat signal. Given the dynamic nature of swimming movements and their impact on the athlete's heart rate, capturing precise and timely heartbeat data is crucial for the Convolutional Neural Network (CNN) classification process. The signal was sampled at 80 samples per second to capture the heartbeat data effectively. This high sampling rate allowed for accurate representation of the athlete's heart rate fluctuations throughout various swimming movements and positions. By considering the heartbeat data alongside other sensor data, the CNN classification algorithm could make more informed decisions and improve the overall accuracy of the swimmer's movement classification.

Fig. 7
figure 7

Signal sampling of the high-frequency component of the heartbeat

4.3 K-Means Algorithm for Unsupervised Classification

Figure 8 shows the K-Means method used to obtain the means of five data clusters. The data distribution depicted in the figure includes the athlete's movement and heartbeat at each swimming exercise stage. The K-Means technique assigns each data point to its nearest cluster centre and recomputes each centre as the average of its assigned points. With this technique, we are able to capture the overall pattern and characteristics of the athlete's pulse and motion, giving valuable information about performance during the different swimming stages.

Fig. 8
figure 8

Clustering the data into five clusters using K-Means

Figure 9 shows the accuracy and error graphs for the K-Means algorithm. The graphs depict the relationship between the K values and the accuracy and error rates, as shown in Fig. 9a and b, respectively. The error rate has an inverse relationship with the K values: as the values grow, the error rate drops. In contrast, the accuracy is directly proportional to the K values, meaning that as the values grow, so does the accuracy. These graphs illustrate the performance and efficacy of the algorithm in achieving increased accuracy and reduced error rates.

After the data is transmitted in bulk and processed in batches, the cloud system uses a Convolutional Neural Network (CNN) to train on the data. Figure 10 shows the training and validation curves, which offer information about the CNN's performance during the training phase. The graph depicts the iterative nature of training, in which the system iterates through the data to achieve the best fit. Notably, the system reaches an ideal state of convergence with 28,000 collected data points, 20,000 epochs, and 8 hidden convolutional layers. This means the model achieves maximal accuracy in predicting the correct motion of the athlete, making it highly successful in identifying swimmer motions accurately.
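For orientation, a much smaller stand-in for such a network is sketched below in PyTorch. The paper's model uses 8 hidden convolutional layers trained on 28,000 data points; this sketch uses two convolutional layers, dummy tensors, and illustrative channel counts purely to show the shape of a 1-D CNN over multichannel sensor windows.

import torch
import torch.nn as nn

class SwimmerCNN(nn.Module):
    # Channels could be gyroscope axes, heart rate, and muscle activity.
    def __init__(self, in_channels=5, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = SwimmerCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 5, 128)                 # dummy batch of sensor windows
y = torch.randint(0, 5, (8,))              # dummy motion labels
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(f"one training step, loss = {loss.item():.3f}")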

Fig. 9
figure 9

The error rate and accuracy rate with respect to the K values

Fig. 10
figure 10

Training and validation fit curves

Figure 11 depicts the algorithm's precise real-time prediction of the athlete's actions. This capability allows the system to be used in coaching sessions and athlete healthcare. Coaches may analyse athletes' performance and provide personalised feedback and coaching using the algorithm's capacity to recognise and categorise distinct swimmer actions with high precision. Furthermore, the system's precise forecasts provide vital insights into players' health and well-being, allowing proactive actions to be taken to optimise training, prevent injuries, and improve overall sports performance.

Fig. 11
figure 11

The classified movement of the athlete in pool

The yaw, pitch, and roll angles can be used to distinguish different movements or swimming styles of a swimmer. By analysing the changes in these angles over time, we can identify specific patterns and characteristics associated with different swimming techniques.

The swimmer in the pool is shown in the Fig. 11 simulation; the different moves in the pool correspond to different positions of the swimmer based on these angles. The classification suggested by the simulation model after training, based on the swimmer's movement, is given below (a minimal code sketch of these distinctions follows the list).

1. Freestyle (Front Crawl)

• Yaw: The swimmer maintains a nearly stable yaw angle, with only small lateral head movements.

• Pitch: The pitch angle changes as the swimmer inhales and exhales during the stroke: the head rises slightly above the water's surface on the inhale and drops on the exhale.

• Roll: As the body rotates along the longitudinal axis, the swimmer rolls equally to the left and right with each stroke.

2. Breaststroke

• Yaw: Similar to freestyle, the swimmer maintains a fairly stable yaw angle.

• Pitch: During the glide phase, the pitch angle changes as the swimmer extends the body forward and brings the head closer to the surface; during the arm recovery phase, the head is raised slowly.

• Roll: Breaststroke has a distinctive rotational movement. During the arm pull, the swimmer rotates the body from side to side, and the head follows the movement of the body.

3. Butterfly

• Yaw: The yaw angle can change slightly during the swimmer's undulating body movements, especially during the dolphin kick.

• Pitch: The pitch angle changes dynamically during the stroke: the head is lifted out of the water during the arm recovery phase and lowered during the underwater phase.

• Roll: In the butterfly stroke, the body rolls left and right, and the head follows the movement of the body.

4. Backstroke

• Yaw: Throughout the stroke, the swimmer maintains a virtually constant yaw angle while looking up.

• Pitch: The pitch angle is nearly constant because the head stays near the surface of the water.

• Roll: In the backstroke, the body rolls evenly to the left and right while rotating along the longitudinal axis.
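To make these distinctions concrete, the toy rule-based discriminator below maps variances of the three angle traces to a stroke label. The thresholds and the use of variance as the summary statistic are illustrative assumptions that would need fitting to real sensor data; in our system this classification is performed by the trained model.

import numpy as np

def classify_stroke(yaw, pitch, roll):
    # Summary statistics of the angle traces over one stroke window
    pitch_var, roll_var = np.var(pitch), np.var(roll)

    if roll_var > 0.05:                       # strong longitudinal roll
        # backstroke keeps pitch nearly constant; freestyle pitch
        # varies with the breathing cycle
        return "backstroke" if pitch_var < 0.01 else "freestyle"
    # butterfly shows large dynamic pitch; breaststroke shows a
    # modest pitch cycle with comparatively little roll
    return "butterfly" if pitch_var > 0.05 else "breaststroke"

t = np.linspace(0, 4 * np.pi, 400)
print(classify_stroke(yaw=0.02 * np.sin(t),
                      pitch=0.3 * np.sin(t),    # breathing pitch cycle
                      roll=0.6 * np.sin(t)))    # pronounced body roll -> freestyle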

Based on the results of the preceding experiments, the suggested system for improving swimmer movement techniques with cloud computing and Artificial Intelligence offers several major benefits [22]. First, the system can effectively categorise and forecast swimmer motions in real time by utilising powerful algorithms such as Convolutional Neural Networks (CNN), offering coaches and players rapid feedback and analysis. Second, edge computing enables efficient data sampling and noise reduction, resulting in high-quality input for the AI algorithms. Third, cloud computing's scalability allows the system to manage massive amounts of data, allowing for continual learning and improvement over time. Furthermore, the system's ability to capture and analyse data from wearable devices such as gyroscopes and heartbeat sensors improves understanding of swimmers' techniques and health. As a result, this integrated method provides significant data to coaches and players, enabling focused training, performance optimisation, and improved athlete care.

5 Conclusion and Future Work

In conclusion, this research successfully integrated edge computation with cloud computation to leverage the power of artificial intelligence for data processing. The use of edge computation addressed the challenges of data cleaning and sampling, while the application of unsupervised clustering at the edge level facilitated the classification of swimmer movement groups. Data annotation was performed using Python, and a Raspberry Pi was employed to transfer the data to cloud computing for further movement classification. The results demonstrated that the accuracy of the distribution increased with the volume of data and the error rate decreased, indicating continuous improvement in unsupervised classification with each batch and swimmer cycle. Furthermore, the performance of the Convolutional Neural Network (CNN) in cloud computing improved as the dataset continued to evolve. The system's scalability and enhanced accuracy make it capable of training numerous swimmers in real time while also monitoring the health condition of each athlete. However, it is important to acknowledge that the current framework has limitations regarding real-time learning and the adoption of new techniques. This research therefore lays the foundation for a scalable and accurate framework with significant potential for advancing swimmer training and monitoring. Future work should focus on integrating deep reinforcement learning to enable the system to adapt to new techniques as they become available.