Abstract
Abstract
In the era of smart cities and advancing transportation technologies, predicting logistics vehicles and vehicle speed is pivotal to enhancing traffic management, safety, and overall transportation efficiency. Accurate prediction serves the interests of both road users and traffic authorities. However, accurately detecting a logistics vehicle and its speed on a single trip is a difficult task, and unpredicted incidents can lead to accidents and fatalities. To overcome these issues, a novel Logistic Vehicle speed detection using the YOLO (LV-YOLO) method has been introduced to detect logistics vehicles and their speed using the YOLO network. The proposed framework is divided into three layers: an image acquisition layer, a segmentation layer, and a detection layer. In the image acquisition layer, a CCTV camera captures highway traffic video, and the collected video is converted into frames. In the segmentation layer, each video frame is segmented using U-Net, which isolates the vehicles in the frame. The detection layer performs truck detection and speed detection using LV-YOLO on the segmented frames based on the Boxy Vehicle dataset. The simulation results show that the LV-YOLO technique maintains an excellent mAP of 99.42%. For vehicle detection, LV-YOLO improves the overall mAP by 1.72%, 5.42%, and 0.82% over the Simple Vehicle Counting System, Real-Time Detection, and Advance YOLOv3 Model, respectively, and for speed prediction by 4.81% and 2.63% over the Deep Learning and CAN protocol and the 1D-CNN speed estimation model, respectively.
1 Introduction
Deep learning, the basis of modern AI technology, has made significant strides in recent years. Target detection technology underpins many applications, including target tracking, semantic segmentation, and unmanned driving, and it invariably plays a part in systems where authorization is essential, such as parking management and toll payment processing systems [1]. Automatic driving, a recent high-tech innovation, depends on the capacity to reliably detect vehicles. Traffic violations, car accidents, and thefts occur often in urban areas and are captured on video by CCTV systems. The vehicle detector in a traffic surveillance system must therefore be fast, precise, and dependable [2]. Vehicle detectors are often evaluated on their real-time detection capacity and on whether they maintain high detection accuracy for traffic objects in bad weather [3, 23].
As the number of motor vehicles on the road grows, automatic vehicle traffic monitoring is becoming an increasingly important piece of traffic control and violation detection technology. There is a great deal of difficulty in supervising traffic in large, crowded cities [4]. The traffic flow data that is used by modern surveillance systems typically takes significant factors like speed, size, trajectory, and vehicle type into account. Additionally, modern sensors that utilize vision are used to track and record particular traffic patterns [5].
With the advancement of deep neural networks (DNNs), a sophisticated subset of ML has emerged that is particularly effective at addressing issues in complex models that are typically difficult to explain using conventional statistical techniques [6]. It recognizes a wide range of objects, including automobiles, persons, license plates, and many others. A CNN can extract key features after training without human involvement [7, 24]. For devices with constrained storage and processing capability, however, the computational load of processing images remains too great. DL-based algorithms are well known as powerful image recognition tools, and among the various vehicle identification algorithms, CNN-based techniques have become increasingly popular [8].
YOLO treats the vehicular perception problem as a regression problem and achieves accurate vehicle detection by classifying images with a CNN. YOLO networks can accelerate detection, instantly recognize motion-blurred cars, and gather information about an object's position, category, and confidence level [9, 25]. In regression-based YOLO methodologies, a single neural network forecasts bounding boxes and class probabilities. The YOLO model was developed to expedite recognizing an object and pinpointing its location in an image. YOLO-based approaches, including YOLOv3, YOLOv4 [10], and YOLOv5, have been shown to maintain their superiority in terms of processing time and accuracy [18]. In this paper, a novel Logistic Vehicle speed detection using the YOLO (LV-YOLO) method has been introduced to detect logistics vehicles and their speed using the LV-YOLO network.
The main contribution of our method is as follows:
- In the image acquisition layer, a CCTV camera first captures the input highway traffic video, and the collected videos are then converted into frames.
- In the segmentation layer, the video frame is segmented using U-Net, which segments the vehicles in the video frames.
- The detection layer performs logistic vehicle detection and speed detection using LV-YOLO on segmented frames based on the Boxy Vehicle dataset.
- Finally, AI can provide detailed information on the speed of the trucks to the traffic police for immediate action to control truck speed.
The remaining portions of this work are organized as follows. Section 2 discusses the literature related to logistic vehicle speed detection. The proposed LV-YOLO method is presented in Section 3, and experiments are conducted in Section 4 to examine its viability. Section 5 concludes the paper with the experimental results.
2 Literature survey
Recently, researchers have introduced numerous deep learning and ML-based methods, especially to increase the precision of vehicle speed detection. This section provides an overview of some recent and advanced techniques.
In 2022, Hussain et al. [11] presented a hybrid phantom approach to promote privacy while reducing energy use. The results reveal that the parameters have an average consistency value of 4.2, a consistency index value of 0.066, energy usage of 1.211 J, and an average safety ratio of 59.41%.
In 2023, Farid [3] designed vehicle identification and classification using the YOLO-v5 network using freely available datasets. The redesigned YOLO-v5 framework adjusts to any challenging traffic patterns. However, the haze images have very limited visibility using the suggested method.
In 2020, Fachrie [12] introduced a straightforward vehicle classification and counting mechanism to aid humans. The system counts vehicles using DL techniques instead of tracking the movements of individual cars. With YOLOv3, it improves system performance and cuts down on processing time. The recommended model counted vehicles in video with 97.72% accuracy.
In 2020, Kim [13] presented a real-time vehicle recognition method based on DL algorithms employed in tunnel photos. Procedures for noise elimination and brightness smoothing are employed to locate the vehicle in the tunnel environment. After creating a training image, the vehicle region is learned using the ground truth method. In various tunnel road settings, the suggested method's detection accuracy is around 94%.
In 2020, Sudha and Priyadarshini [14] employed an updated YOLOv3 advanced DL model and improved visual background extractor methods to detect the various types and numbers of vehicles in an input video. The average accuracy of the trial was 98.6%, and the results were captured on multiple-input, high-definition films with a monocular camera.
In 2023, Chen et al. [15] suggested an edge intelligence-based enhanced YOLOv4 vehicle detection to enhance vehicle detection performance using ECA and HRNet, to enhance segmentation precision using the original backbone network with MobileNetv2. The outcome demonstrates that the suggested strategy may raise the accuracy of vehicle detection from 82.03 to 86.22% and raise the quality of the segmentation model from 73.32 to 75.63%.
In 2023, Zaman et al. [16] employed different classification networks to create an ensemble CNN-based improved driver facial expression recognition model. The R-CNN model is applied to detect the drivers' faces. It is capable of identifying faces reliably both in real time and in offline video, achieving better accuracy on face detection and DFER datasets.
In 2023, Azhar et al. [17] introduced a DL accident prediction model that combines extended features like weather, geo-coded locations, and time information. The accuracy for accident detection is raised by 8%, bringing the test accuracy to 94%. However, it does not rely on a decreasing map architectural process because data is only provided when an accident happens.
According to the above literature, various DL and ML techniques focus on vehicle detection and speed detection. However, existing techniques are time-consuming, have lower performance rates and higher training loss, need improvement in terms of mAP, and struggle most with predicting quantities such as speed and movement. Therefore, the proposed LV-YOLO method is used to detect logistics vehicles and their speed accurately in a short time.
3 LV-YOLO methodology
This section introduces the Logistic Vehicle speed detection using the YOLO (LV-YOLO) method for highway truck detection and truck speed calculation. The proposed framework is divided into three layers: layer 1 is the image acquisition layer, layer 2 is the segmentation layer, and layer 3 is the detection layer. In the image acquisition layer, a CCTV camera first captures the input highway traffic video, and the collected video is converted into frames. In the segmentation layer, the video frame is segmented using U-Net, which segments the vehicles in the video frames. The detection layer performs logistic vehicle detection and speed detection using LV-YOLO on segmented frames based on the Boxy Vehicle dataset. Finally, AI can provide detailed information on the speed of the trucks to the traffic police. The proposed LV-YOLO method's general flow is depicted in Fig. 1.
3.1 Image acquisition layer
The truck on the highway is found using the logistic vehicle detection approach as shown in Fig. 1. The road area is divided into a remote region and a proximal area depending on where the camera was installed.
The CCTV cameras' real-time highway videos are first collected, and the videos that have been acquired are then turned into frames. The highway route is being traveled by a variety of vehicles, including cars, trucks, buses, motorbikes, bicycles, etc.
3.2 Segmentation layer
U-Net is used in this layer to segment the frames. Segmentation is used to comprehend what an image contains at the pixel level. It offers detailed information about the image as well as the vehicles' shapes and boundaries. The result of image segmentation is a mask, each element of which denotes the class that a given pixel belongs to. This approach can be used to control traffic systems and has shown encouraging results on real imagery. Figure 2 shows the up-sampling and down-sampling paths that make up U-Net.
The down-sampling path is made up of five convolutional blocks. Each block uses two convolutional layers, and the number of feature maps increases from 1 to 1024 across the path. Except after the last block, downsampling is performed using max pooling, which reduces the feature map from 240 × 240 to 15 × 15. Each up-sampling block starts with a deconvolution layer that increases the size of the feature maps from 15 × 15 back to 240 × 240 while reducing the number of feature maps, and integrates the deconvolutional feature map with the corresponding feature map from the encoding path. Finally, a 1 × 1 convolutional layer reduces the output to two feature maps, representing the foreground and background segmentation, respectively.
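As a sanity check on the arithmetic above, the feature-map side lengths along the down-sampling path can be sketched in a few lines of Python (assuming square 240 × 240 inputs, five blocks, and 2 × 2 max pooling after every block except the last, as described in the text):

```python
def downsample_sizes(side=240, blocks=5):
    """Feature-map side length entering each down-sampling block.

    Stride-2 max pooling halves the side length between blocks,
    i.e. it is applied after every block except the last one.
    """
    sizes = [side]
    for _ in range(blocks - 1):
        side //= 2
        sizes.append(side)
    return sizes

print(downsample_sizes())  # [240, 120, 60, 30, 15]
```

Four pooling steps over five blocks take the map from 240 × 240 down to 15 × 15, matching the figures quoted above.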
3.3 Detection layer
In recent years, with the development of DL in object detection, many deep detection models have been proposed. YOLO represents a seminal advancement in object detection within computer vision: it takes a different approach by framing object detection as a regression problem. In the proposed method, LV-YOLO detects logistic vehicles and vehicle speed based on prior data from the Boxy Vehicle dataset.
3.3.1 Boxy vehicle dataset
The Boxy vehicle dataset was used to train the LV-YOLO for image-based vehicle detection. The majority of the images in the dataset include traffic scenes and vehicles on roadways. Since the input to our system is CCTV footage of traffic and road scenarios, this dataset aligns ideally with our use case. The dataset includes 1,990,806 vehicles labeled with 3D-like and 2D bounding boxes over 200,000 images. During the distillation training process, the dataset's 2D ground-truth annotations are used as hard labels. The images cover sunny and rainy conditions at daytime, dawn, and dusk.
3.3.2 LV-YOLO for truck detection
LV-YOLO detects trucks, their speed, and the truck count. The proposed method uses the 5th version of YOLO. The LV-YOLO network is trained using the UFPR-ALPR dataset. LV-YOLO is a sophisticated object detection network made up of three modules: Backbone, Neck, and Head. As Fig. 3 shows, LV-YOLO uses the CSPDarknet53 architecture with the SPP layer as the backbone, PANet as the neck, and the LV-YOLO detecting head. The network recognizes objects accurately in real time. The technique divides the entire image into components, evaluates each component with a single neural network, and predicts the bounding box and probability of each component; the predicted probabilities weight these bounding boxes. Because the neural network runs only one forward propagation pass before providing predictions, the technique "looks once" at the image. After non-max suppression, detected items are delivered.
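The non-max suppression step mentioned above keeps the highest-confidence box for each object and discards overlapping duplicates. A minimal pure-Python sketch (the 0.5 IoU threshold is an illustrative assumption, not a value from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the best-scoring box, drop boxes that
    overlap it by more than `thresh`, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep

# two near-duplicate detections of one truck plus a distant one
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]
```

Production detectors typically use a vectorized implementation, but the selection logic is the same.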
3.3.2.1 Backbone
The Input includes three parts: truck detection, speed detection, and truck counting as shown in Fig. 3. LV-YOLO uses a CSPDarknet53 backbone network for detecting the truck on the highway based on the Boxy vehicle dataset. The input image is processed, and hierarchical characteristics are extracted from it. Convolutional layers, pooling layers, and other architectural components created to capture features at various scales make up the backbone network. These layers successively shrink the input image's spatial dimensions while deepening the feature maps. A CSP connection has been added to Darknet53 to facilitate information flow, creating CSPDarknet53. The backbone generates a hierarchy of feature maps, where each map captures features at a specific scale. These feature maps are passed on to subsequent components for further processing.
3.3.2.2 Neck
The neck is an intermediate component placed between the backbone and the detection head. LV-YOLO employs PANet as its neck structure. By combining truck characteristics at many scales from various backbone network stages, PANet improves the model's capacity to identify speed. This step, which is further divided into the calibration factor and the speed calculation, determines the truck's detected speed.
3.4 Calibration factor
A crucial component of speed calculation is camera calibration. It is the ratio of actual distance to pixel. Since the vehicle cannot fly into the air or go over the ground, it can be seen as a 2D-to-2D conversion. Knowing the real length of any object and dividing it by the length in pixels of the identical object in the image yields the calibration factor.
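In code, the calibration factor follows directly from this definition. The 3 m lane-marking reference below is a hypothetical example, not a measurement from the paper:

```python
def calibration_factor(real_length_m, pixel_length):
    """Calibration factor C in km per pixel: the real-world length of a
    reference object (converted from metres to km) divided by the length
    in pixels of the same object in the image."""
    return (real_length_m / 1000.0) / pixel_length

# e.g. a 3 m lane marking that spans 60 pixels in the image
C = calibration_factor(3.0, 60)
print(C)  # 5e-05 km/pixel
```

Because the camera is fixed and vehicles stay on the road plane, a single factor per image region suffices, as the 2D-to-2D argument above notes.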
3.5 Speed calculation
Two traffic signals are used to calculate the ultimate speed in km/h. Assuming the video is played at 30 frames per second, the speed is updated every 0.5 s, using the centroid values that are stored in an array for every 15th frame. The distance traveled by each object can now be determined using Eq. 2:

$$D=\sqrt{{(a-e)}^{2}+{(b-f)}^{2}}$$

where (a, b) denotes the centroid coordinates of an object in frame (i), and (e, f) denotes the centroid coordinates of the same object in frame (i-15). The frame rate can be used to calculate the time required for these 15 frames. The truck's ultimate speed is then determined by Eq. 3:

$$S=\frac{D\times C}{T}$$

where T represents the time for 15 frames in hours, D represents the distance moved by the object in pixels, and C is the calibration factor for that particular region in km/pixel.
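The speed computation described above can be sketched in a few lines of Python. This is a minimal illustration assuming Eq. 2 is the Euclidean distance between the two centroids and Eq. 3 is distance × calibration factor ÷ time; the calibration value used is hypothetical:

```python
import math

FPS = 30        # video frame rate assumed in the text
FRAME_GAP = 15  # centroids are compared every 15th frame (0.5 s)

def pixel_distance(a, b, e, f):
    """Distance D in pixels between an object's centroid (a, b) in
    frame i and its centroid (e, f) in frame i-15."""
    return math.hypot(a - e, b - f)

def speed_kmh(a, b, e, f, calibration_km_per_px):
    """Speed S = D * C / T, with T the 15-frame interval in hours."""
    t_hours = FRAME_GAP / FPS / 3600.0
    return pixel_distance(a, b, e, f) * calibration_km_per_px / t_hours

# a centroid that moves 100 px in 0.5 s at C = 5e-05 km/px -> 36 km/h
print(speed_kmh(60, 80, 0, 0, 5e-05))  # 36.0
```

Note that T is tied to the frame rate, so a wrongly assumed FPS scales every reported speed proportionally.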
3.6 Head
The head is the last part of the object detection model; it is in charge of predicting bounding boxes, calculating speed, and counting the number of vehicles seen in the image. The head of LV-YOLO consists of detection heads for multiple scales (e.g., YOLOv5s has three scales, and YOLOv5x has six scales). Each detection head predicts the object's bounding box, objectness score, and class probability at its scale. Typically, the head uses anchor boxes, which allow the prediction of object size and position, so the model can effectively detect objects of various sizes. LV-YOLO is also well suited for a variety of applications, including object recognition in real-time video streams, robotics, and autonomous cars, which has helped it gain popularity. Truck detection and speed detection using LV-YOLO are shown in Fig. 4.
4 Result and discussion
The effectiveness of the LV-YOLO approach is assessed and analyzed in this section. The proposed method is implemented in MATLAB 2020b on a Windows 10 PC with an Intel Core i3 CPU clocked at 2.10 GHz and 8 GB of RAM. Figure 5 shows the simulation results for truck detection, speed detection, and truck counting. The logistic vehicle speed is determined in pixels moved per second before being converted into kilometers per hour, and the logistic vehicle is detected using LV-YOLO based on the Boxy Vehicle dataset.
4.1 Performance analysis
The effectiveness of the LV-YOLO method was measured using the following metrics: Mean Average Precision (mAP) using Eq. 4, Frames Per Second (FPS) using Eq. 5, and Mean Square Error (MSE) using Eq. 6. The MSE is computed as

$$\mathrm{MSE}=\frac{1}{P}\sum_{p=1}^{P}{\left({X}^{p}-{\widehat{X}}^{p}\right)}^{2}$$

where \({X}^{p}\) and \({\widehat{X}}^{p}\) are the predicted and ground-truth vehicle speeds at the \({p}^{th}\) future second, and P is the prediction horizon in seconds. On the Boxy Vehicle dataset, the LV-YOLO achieves an overall mAP of 99.42%.
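The speed-error metric can be computed directly from the predicted and ground-truth speed sequences. This is a minimal sketch of the standard MSE definition, not the paper's evaluation code; the sample speeds are hypothetical:

```python
def mse(predicted, ground_truth):
    """Mean square error between predicted and ground-truth vehicle
    speeds over the evaluation horizon (one value per future second)."""
    assert len(predicted) == len(ground_truth) and predicted
    return sum((x - y) ** 2
               for x, y in zip(predicted, ground_truth)) / len(predicted)

# predictions of 30 and 40 km/h against ground truth 32 and 38 km/h
print(mse([30.0, 40.0], [32.0, 38.0]))  # 4.0
```

A lower MSE means the predicted speed curve tracks the measured one more closely; a perfect predictor scores 0.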
As shown in Figs. 6 and 7, the LV-YOLO achieves the highest mAP in both training and testing. Figure 7 also displays the loss. The proposed model achieves a mAP of 99.42%, demonstrating the LV-YOLO network's ability to improve detection mAP.
4.2 Comparative analysis
The proposed method is more successful than previously employed strategies. Figure 8 compares the detection results of the proposed LV-YOLO network with existing networks such as YOLOv3 and YOLOv4. It clearly shows that the proposed LV-YOLO method outperforms existing techniques on the Boxy vehicle dataset.
In urban areas, deep-learning-powered truck detection can help with traffic control and city planning. Truck detection using deep learning may be used for security and surveillance in a variety of scenarios, including airports, seaports, and border crossings. Urban planners and policymakers may use truck detection technologies to collect information on the movement of products and materials inside cities. This data may help drive infrastructure development decisions such as road maintenance, the building of new transportation hubs, and the application of truck-specific rules to alleviate congestion and environmental impact [23,24,25].
Figure 9 compares the proposed LV-YOLO network with the YOLOv3 and YOLOv4 networks, which achieve lower mAP and FPS. LV-YOLO achieves a high mAP of 99.42%, whereas YOLOv3 and YOLOv4 obtain 96.32% and 97.36%, respectively. The FPS obtained by YOLOv3, YOLOv4, and LV-YOLO is 20, 65, and 89, respectively.
Table 1 shows the performance comparison of vehicle detection and speed detection between LV-YOLO and existing methods. The LV-YOLO method maintains a high mAP of 99.42%. Compared to CNN-TCAM, CNN, 1D-CNN, and EMD-Informer for vehicle detection, the LV-YOLO technique improves overall mAP by 1.72%, 5.42%, 0.82%, and 0.96%, respectively. According to Table 1, the proposed LV-YOLO achieves a high mAP of 99.42% with low MSE and fast inference (1.28 s per signal). The inference time is measured from the edge vehicle signal to the deep learning algorithm's output; at 1.28 s per signal, LV-YOLO is faster than all the existing techniques while maintaining 99.42% accuracy. Compared to existing methods, the proposed method yields better mAP and lower MSE.
5 Conclusion
In this research, an LV-YOLO method has been introduced to detect logistics vehicles and their speed. The collected highway video is converted into frames, and each video frame is segmented using U-Net, which isolates the vehicles in the frames. The detection layer performs logistic vehicle detection and speed detection using LV-YOLO based on the Boxy Vehicle dataset. The LV-YOLO model was evaluated based on mAP and FPS: it maintains an excellent mAP of 99.42% and an FPS of 89. In comparison, YOLOv3 and YOLOv4 obtain mAP values of 96.32% and 97.36%, and the FPS obtained by YOLOv3, YOLOv4, and LV-YOLO is 20, 65, and 89, respectively. The LV-YOLO method improves the overall mAP by 1.72%, 5.42%, and 0.82% over the Simple vehicle counting system, Real-time detection, and Advance YOLOv3 model, respectively. The simulation outcomes show that the LV-YOLO method detects vehicle speed and trucks successfully, enhancing both logistics vehicle recognition and speed detection while using a similar amount of model parameters and computational complexity as existing approaches. Currently, the proposed method can only detect vehicles and not classify them. In the future, an advanced YOLO network will be used to detect district road vehicle speeds and to count and classify vehicles into various categories with more accurate results.
Data availability
Data sharing is not applicable to this article as no new data were created or analyzed in this Research.
References
Chen, Y., Li, Z.: An effective approach of vehicle detection using deep learning. Comput. Intell. Neurosci. (2022). https://doi.org/10.1155/2022/2019257
Appathurai, A., Sundarasekar, R., Raja, C., Alex, E.J., Palagan, C.A., Nithya, A.: An efficient optimal neural network-based moving vehicle detection in traffic video surveillance system. Circuits Syst. Signal Process. 39, 734–756 (2020). https://doi.org/10.1007/s00034-019-01224-9
Farid, A., Hussain, F., Khan, K., Shahzad, M., Khan, U., Mahmood, Z.: A fast and accurate real-time vehicle detection method using deep learning for unconstrained environments. Appl. Sci. 13(5), 3059 (2023). https://doi.org/10.3390/app13053059
Wu, Q., Li, X., Wang, K., Bilal, H.: Regional feature fusion for on-road detection of objects using camera and 3D-LiDAR in high-speed autonomous vehicles. Soft. Comput. 27(23), 18195–18213 (2023). https://doi.org/10.1007/s00500-023-09278-3
Alhuthali, S.A.H., Zia, M.Y.I., Rashid, M.: A simplified traffic flow monitoring system using computer vision techniques. In: 2022 2nd international conference on computing and information technology (ICCIT). IEEE, 167–170 (2022). https://doi.org/10.1109/iccit52419.2022.9711550
Othmani, M.: A vehicle detection and tracking method for traffic video based on faster R-CNN. Multimed. Tools Appl. 81(20), 28347–28365 (2022). https://doi.org/10.1007/s11042-022-12715-4
Ammar, A., Koubaa, A., Boulila, W., Benjdira, B., Alhabashi, Y.: A multi-stage deep-learning-based vehicle and license plate recognition system with real-time edge inference. Sensors 23(4), 2120 (2023). https://doi.org/10.3390/s23042120
Rafique, A.A., Al-Rasheed, A., Ksibi, A., Ayadi, M., Jalal, A., Alnowaiser, K., Meshref, H., Shorfuzzaman, M., Gochoo, M., Park, J.: Smart traffic monitoring through pyramid pooling vehicle detection and filter-based tracking on aerial images. IEEE Access 11, 2993–3007 (2023). https://doi.org/10.1109/access.2023.3234281
Gu, Y., Si, B.: A novel lightweight real-time traffic sign detection integration framework based on YOLOv4. Entropy 24(4), 487 (2022). https://doi.org/10.3390/e24040487
Gayathri, K., Ajitha Gladis, K.P., Angel Mary, A.: Real time masked face recognition using deep learning based yolov4 network. Int. J. Data Sci. Artif. Intell. 01(01), 26–32 (2023). https://doi.org/10.1145/3484824.3484903
Hussain, T., Yang, B., Rahman, H.U., Iqbal, A., Ali, F.: Improving source location privacy in social internet of things using a hybrid phantom routing technique. Comput. Secur. 123, 102917 (2022). https://doi.org/10.1016/j.cose.2022.102917
Fachrie, M.: A simple vehicle counting system using deep learning with YOLOv3 model. J. RESTI (Rekayasa Sistem Dan Teknologi Informasi) 4(3), 462–468 (2020). https://doi.org/10.29207/resti.v4i3.1871
Kim, J.: Vehicle detection using deep learning technique in tunnel road environments. Symmetry 12(12), 2012 (2020). https://doi.org/10.3390/sym12122012
Sudha, D., Priyadarshini, J.: An intelligent multiple vehicle detection and tracking using modified vibe algorithm and deep learning algorithm. Soft. Comput. 24, 17417–17429 (2020). https://doi.org/10.1007/s00500-020-05042-z
Chen, C., Wang, C., Liu, B., He, C., Cong, L., Wan, S.: Edge intelligence empowered vehicle detection and image segmentation for autonomous vehicles. IEEE Trans. Intell. Transp. Syst. (2023). https://doi.org/10.1109/tits.2022.3232153
Zaman, K., Zhaoyun, S., Shah, B., Hussain, T., Shah, S.M., Ali, F., Khan, U.S.: A novel driver emotion recognition system based on deep ensemble classification. Complex Intell. Syst. (2023). https://doi.org/10.1007/s40747-023-01338-3
Azhar, A., Rubab, S., Khan, M.M., Bangash, Y.A., Alshehri, M.D., Illahi, F., Bashir, A.K.: Detection and prediction of traffic accidents using deep learning techniques. Clust. Comput. 26(1), 477–493 (2023). https://doi.org/10.1007/s10586-021-03502-1
Karthi, S.P., RL, A.R., Buvanesh, K.K., Amalan, E. and Harishkumar, S.: Electric vehicle speed control with traffic sign detection using deep learning. In: 2022 international conference on advanced computing technologies and applications (ICACTA). IEEE, 1–6 (2022). https://doi.org/10.1109/icacta54488.2022.9753624
Jiao, X., Wang, Z., Zhang, Z.: Vehicle speed prediction using a combined neural network of convolution and gated recurrent unit with attention.
Li, Y., Wu, C., Yoshinaga, T.: Vehicle speed prediction with convolutional neural networks for ITS. In: 2020 IEEE/cic international conference on communications in China (ICCC workshops). IEEE, 41–46 (2022)
Cvijetić, A., Djukanović, S., Perunicic, A.: Deep learning-based vehicle speed estimation using the YOLO detector and 1D-CNN. In: 2023 27th international conference on information technology (IT). IEEE, 1–4 (2023)
Tian, X., Zheng, Q., Yu, Z., Yang, M., Ding, Y., Elhanashi, A., Saponara, S., Kpalma, K.: A real-time vehicle speed prediction method based on a lightweight informer driven by big temporal data. Big Data Cogn. Comput. 7(3), 131 (2023)
Muthukumaran, N., Kumar, C., Joshua Samuel Raj, R., Andrew Roobert, A.: Grey wolf optimized Pi controller for high gain SEPIC converter for PV application. In: 2023 international conference on sustainable communication networks and application (ICSCNA), Theni, India, 1032–1035 (2023). https://doi.org/10.1109/ICSCNA58489.2023.10370322.
Ramaswamy, S., Joe Patrick Gnanaraj, S., Chandra Sekar, K., Muthukumaran, N.: Analysis of distribution line in link with substation using gsm technology. In: 2023 international conference on sustainable communication networks and application (ICSCNA), Theni, India, 526–528 (2023). https://doi.org/10.1109/ICSCNA58489.2023.10370197
Prabhu, M., Revathy, G., Raja Kumar, R.: Deep learning based authentication secure data storing in cloud computing. Int. J. Comput. Eng. Optim. 01(01), 10–14 (2023)
Acknowledgements
The authors would like to thank the reviewers for all of their careful, constructive and insightful comments in relation to this work.
Funding
No Financial support.
Author information
Authors and Affiliations
Contributions
The authors confirm contribution to the paper as follows: Study conception and design: Gopika rani N, Hema priya N, Ahilan A, Muthukumaran N; Data collection: Gopika rani N, Hema priya. N, Ahilan A, Muthukumaran N; Analysis and interpretation of results: Gopika rani N, Hema priya N, Ahilan A, Muthukumaran N; Draft manuscript preparation: Gopika rani, Hema priya N, Ahilan A, Muthukumaran N; All authors reviewed the results and approved the final version of the manuscript.
Corresponding author
Ethics declarations
Conflict of interest
The authors have no conflict of interest to declare regarding the publication of this paper.
Ethical approval
My research guide reviewed and ethically approved this manuscript for publishing in this Journal.
Human and animal rights
This article does not contain any studies with human or animal subjects performed by any of the authors.
Informed consent
I certify that I have explained the nature and purpose of this study to the above-named individual, and I have discussed the potential benefits of this study participation. The questions the individual had about this study have been answered, and we will always be available to address future questions.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Rani, N.G., Priya, N.H., Ahilan, A. et al. LV-YOLO: logistic vehicle speed detection and counting using deep learning based YOLO network. SIViP 18, 7419–7429 (2024). https://doi.org/10.1007/s11760-024-03404-w