Abstract
For traffic and security surveillance, moving object detection and segmentation are critical, and detecting moving objects in dynamic environments is more difficult than in static environments. In this paper, research articles published between 2011 and 2022 in IEEE Xplore, ScienceDirect, and various conferences and journals were referenced for a systematic review of techniques for identifying objects in images and videos captured under adverse environmental conditions. We used different tags and keywords to search for papers on the topic under study; all retrieved papers were studied, the proposed techniques were analyzed, and the relevant information was gathered. On the basis of this analysis, we outline future prospects for the area under study and survey the techniques proposed over time to detect moving objects under various environmental conditions.
1 Introduction
Numerous resources are available from digital and nondigital sources, and over time scholars from different areas have explored many fields of study to gather information. For traffic surveillance and security surveillance, moving object detection and segmentation are critical, and detecting moving objects in dynamic environments is more difficult than in static environments. The inclination of researchers and industries to improve both the quantity and the quality of unstructured data has increased over time. A large body of data is available in the field of object identification and classification for researchers to dig into when investigating techniques for the identification of objects.
Furthermore, thanks to the immense growth and availability of online resources, users have broad exposure to ideas, approaches, opinions, and recommendations on various methods. Such an abundance of data opens up new opportunities for scholars to analyze the existing techniques in their areas of interest. Image processing has applications in many fields of study; among them, the contributions of image-processing and computer-vision techniques to security and surveillance are remarkable. Moving object surveillance is an active area of research that detects, identifies, and tracks objects in a moving sequence of images. Objects in the video frame sequence are identified for video sequencing [1]. To track objects from moving vehicles, or from video sequences of moving vehicles, it is important to detect an object when it first appears in the video sequence, or to detect objects in every image frame of the sequence. A good-quality, high-intensity video camera is required to capture and acquire inputs for effective and clear object detection. The identification of objects in videos follows three important steps:
1. Detection of the object of interest
2. Tracking of the object across image frames
3. Identification of the object's behavior in an image
Morphology provides operations for analyzing objects of different forms and shapes, which aids in object analysis and recognition. The required information is extracted from the image for analysis, helping to yield an improved image. Morphological operations depend on the relative ordering of pixel values rather than on their numerical magnitudes [2]. The structuring element determines how the operation probes the image and the neighborhoods into which it fits.
1.1 Morphological Operations on Image
1. Erosion: In this process, object boundaries are eroded away, which shrinks the object. The erosion of a binary image I by a structuring element B can be defined as I ⊖ B = {z | B_z ⊆ I}, where B_z is B translated by z. Erosion reduces the size of objects by etching away the object borders as the structuring element passes over every pixel of the image.
2. Dilation: This operation expands objects, filling in small holes and connecting disjointed objects; it can be defined dually as I ⊕ B = {z | B_z ∩ I ≠ ∅}.
For descriptor extraction, grayscale image representations are employed; such representations simplify processing and reduce computational requirements [21,22,23] (Fig. 1).
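The two morphological operations above can be sketched directly on binary arrays. The following is a minimal NumPy illustration of the definitions (an assumption-laden sketch, not the implementation used in any of the surveyed papers): a pixel survives erosion only where the structuring element fits entirely inside the object, and dilation is its dual.

```python
import numpy as np

def erode(img, se):
    """Binary erosion of `img` by structuring element `se` (both 0/1 arrays)."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    # Zero-pad so the structuring element can be centered on border pixels.
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + se.shape[0], j:j + se.shape[1]]
            # The pixel survives only if the SE fits entirely inside the object.
            out[i, j] = np.all(window[se == 1] == 1)
    return out

def dilate(img, se):
    """Binary dilation: the pixel turns on if the SE overlaps the object anywhere."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + se.shape[0], j:j + se.shape[1]]
            out[i, j] = np.any(window[se == 1] == 1)
    return out
```

Eroding a 5 × 5 square with a 3 × 3 structuring element leaves its 3 × 3 interior, while dilating it grows the square by one pixel on each side, matching the shrink/expand behavior described above.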
1.2 Impact of the Environment on Objects
Humans infer object-to-object interactions, part-to-whole object hierarchies, object properties, and 3D scene structure in addition to identifying and locating items in a scene. A better understanding of situations would help with applications such as robotic interaction, which often needs information beyond item identification and position; this requires not only scene perception but also knowledge of the physical surroundings. An image consists of pixels, and every pixel has a numerical intensity value. To classify images, it is very important to identify objects, whether static or moving, and those objects must be clearly visible. Many factors influence the visibility of an image; one key factor is environmental conditions such as fog, rain, snow, and dust (Fig. 2).
The environment also significantly affects the identification of images. Images taken in hazy or foggy weather are difficult to identify because of their lack of clarity; fog and dust, in particular, drastically diminish visibility distance. Because of light scattering and attenuation, the colors of nearby objects appear quite similar, with poor saturation. Under such circumstances it is difficult to distinguish between objects because the edges between the background and the foreground item become blurred. Many cameras are installed in modern automobiles and are used for a wide range of purposes. Detecting fog in images captured by a camera mounted on a vehicle is a difficult task that could be useful in a variety of situations [3]. So far, methods have focused on the attributes of nearby items in the image, such as lane markings, traffic signs, vehicle backlights, and approaching-vehicle headlights. In contrast to these studies, some researchers suggest methods that utilize image descriptors and take a classification approach to distinguish between images with and without fog.
2 Methodology and Research Description
Image-processing techniques are used to visibly improve image appearance and thereby help a human or an automated system interpret an image. They may also be used to identify images and vehicles under adverse weather conditions, such as fog, rain, and hail. Table 2 presents research conducted in similar fields.
This study conducts a literature review of research aimed at identifying objects. For the analysis, the Scopus database was selected because it indexes a wide range of engineering literature from conferences and journals and lets scholars explore a wide range of research articles. Papers from 2011 to 2022 are included. For the selection of research papers, we focused on keywords such as "object detection," "object recognition," "moving vehicles in the fog," "vehicle in the fog," "morphology," "neural network," and "deep learning." We also used the Boolean connectors "AND" and "OR" and the symbol "+" to constrain the search so that more specific and meaningful data could be gathered. A first search using keywords such as "object identification," "fog," and "moving vehicle" yielded 7300 research papers. In the second phase, restricting the results to studies published in journals reduced them to 1100 publications. In the third phase, we focused primarily on object identification in fog, which reduced the number of research papers to 423 (Figs. 3 and 4).
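The three-phase refinement above can be expressed as successive filters over the retrieved records. The sketch below runs the same AND-style keyword logic over a tiny mock record list (the titles and venues are invented for illustration; the real corpus came from the database export):

```python
# Hypothetical records standing in for a database export.
records = [
    {"title": "Object identification in fog from a moving vehicle", "venue": "journal"},
    {"title": "Object detection benchmark survey", "venue": "conference"},
    {"title": "Vehicle detection in rain", "venue": "journal"},
]

def matches(record, *terms):
    # Boolean AND over keywords, mirroring the "AND" connector used in the search.
    return all(t in record["title"].lower() for t in terms)

# Phase 1: broad keyword search ("object" OR "vehicle").
phase1 = [r for r in records if matches(r, "object") or matches(r, "vehicle")]
# Phase 2: restrict to journal publications.
phase2 = [r for r in phase1 if r["venue"] == "journal"]
# Phase 3: focus on object identification in fog.
phase3 = [r for r in phase2 if matches(r, "fog")]
```

In the actual study the same narrowing took the corpus from 7300 hits to 1100 journal publications and finally to 423 fog-focused papers.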
3 Findings and Results
This section is divided into three subsections: year-by-year statistics, journal-by-journal statistics, and theme-by-theme reporting. The first subsection displays the year-by-year distribution of publications and a list of periodicals. The second shows the high-frequency words from the titles and author keywords of the publications studied. The third shows diverse themes across several image-processing approaches. Various functional factors, such as contrast sensitivity, brightness, transparency, texture gradient, and light, affect the visibility of an image display. A few more key performance indicators, also known as image-quality factors, influence image quality and visibility: sharpness, noise, dynamic range, color accuracy, distortion, homogeneity, glare, artifacts, and compression. In addition, environmental factors such as rain, hail, dusty wind, and fog influence image visibility. Researchers have carried out significant work on all these factors and their effects and have proposed corrective techniques to overcome these problems, except for fog.
Although a few researchers have studied object identification in foggy weather, they have not placed significant weight on identifying objects from moving vehicles in fog or from video sequences of vehicles moving in fog, which leaves a huge gap to fill. We have searched the research papers in this field and have identified and focused on their techniques for identifying objects in fog.
Various researchers have proposed different methods for identifying objects from moving vehicles so that the vehicles themselves can also be identified (Figs. 5 and 6).
All the referenced papers were read, analyzed, and categorized according to vehicle identification and fog estimation. Various researchers have proposed techniques for vehicle identification under adverse environmental conditions, such as rain, haze, and fog. Fog density has been characterized in terms of the visibility of a preceding vehicle and the distance to it. One method recognizes daytime images in dense fog by applying the Fourier transform to obtain global features; applied to a large set of images, it classified 96% of fog-free photographs as fog-free and 93% of fog images as fog images. The researchers used photographs annotated with fog values, as shown in Table 1; the data did not include road profiles [4]. To categorize fog, various calculations have been performed on the basis of visibility distance.
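The global frequency-domain idea can be illustrated with a toy feature: fog crushes contrast, which suppresses high-frequency energy in the image's Fourier spectrum. The sketch below is an illustration of that principle only, not the classifier of the cited work; the images, patch radius, and threshold-free comparison are all assumptions.

```python
import numpy as np

def high_freq_energy(img):
    """Fraction of Fourier-spectrum energy outside the low-frequency core.

    Foggy, low-contrast images concentrate their energy near DC, so this
    fraction drops relative to a sharp, high-contrast scene.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8                      # size of the low-frequency core
    core = spectrum[cy - r:cy + r, cx - r:cx + r].sum()
    total = spectrum.sum()
    return (total - core) / total

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))                # high-contrast stand-in image
foggy = 0.5 + 0.1 * (sharp - 0.5)           # same scene with contrast crushed toward gray
```

Comparing `high_freq_energy(foggy)` with `high_freq_energy(sharp)` shows the foggy version scoring lower, which is the property a global-feature fog classifier exploits.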
Following the proposed methodology, researchers assessed fog images and fog-free images. Afterwards, a semisupervised convolutional neural network was proposed: a sparse Laplacian filter was applied to gather information about the vehicles, and a softmax classifier layer was used as the output layer for the labeled vehicles. The authors worked with the BIT-Vehicle data set, which includes 9850 images, to test the proposed method; only 10% of the images were nighttime images [3, 4] (Fig. 7).
First, foreground objects are extracted from the video; then the hierarchical multi-SVM (support vector machine) method is applied for vehicle classification, with a voting-based correction scheme for final precision. This approach achieves vehicle classification in complicated traffic scenes. Singh et al., Zhuo et al., Murugan et al., and Chowdhury et al. presented vehicle classification techniques based on neural networks. In the pretraining stage, GoogLeNet is applied to the ILSVRC-2012 data set and, after fine-tuning, a vehicle data set of 13,700 images is used for the final classification [5,6,7,8]. Liu et al. proposed techniques to count the vehicles on the road in real time: they applied image subtraction to foreground and background images and used image binarization, a method of counting objects that yields faster computation. Wang et al. proposed classifying images on the basis of deep learning and generative adversarial networks (GANs) [9, 10]. Jyothi et al. proposed a high-accuracy convolutional neural network (CNN) technique for vehicle classification, focusing on cars and trucks [11]. Morphological operations were applied for vehicle detection and identification; Chandrika et al. reviewed video sequences of vehicles and extracted frames from them [12] (Fig. 8).
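The image-subtraction-plus-binarization counting idea can be sketched as follows: subtract a background frame, threshold the absolute difference into a binary mask, and count connected foreground blobs. This is a minimal NumPy sketch under assumed inputs (float images in [0, 1], a fixed background frame, a hypothetical threshold), not the cited implementation.

```python
import numpy as np
from collections import deque

def count_moving_objects(frame, background, thresh=0.2):
    """Count foreground blobs after background subtraction and binarization."""
    # Binarize the absolute frame-minus-background difference.
    mask = (np.abs(frame - background) > thresh).astype(np.uint8)
    # Count 4-connected foreground blobs with a breadth-first flood fill.
    seen = np.zeros(mask.shape, dtype=bool)
    count = 0
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            if mask[y, x] and not seen[y, x]:
                count += 1
                q = deque([(y, x)])
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count
```

The appeal of this pipeline, as the surveyed work notes, is speed: thresholded subtraction and a single labeling pass are far cheaper than running a detector on every frame.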
Kalyan et al., Shyamala et al., Şentaş et al., Hedeya et al., Zahra et al., Jagannathan et al., and Miclea et al. applied image binarization and Sobel edge detection, with morphological operations applied to the image to identify objects. These researchers used their own video data sets, tested an SVM classifier and tiny-YOLO on a data set, and proposed adaptive histogram equalization and a Gaussian mixture model; they also implemented a technique to enhance vehicle image quality and detected vehicles from denoised images [13,14,15,16,17,18,19]. Kim et al. proposed an efficient technique for visibility measurement under foggy weather conditions [20]. The International Commission on Illumination (CIE) defines the meteorological visibility distance as the distance beyond which a black object of suitable size is perceived with a contrast of less than 5%. Jiang et al. applied the Canny–Deriche filter [21]; the purpose of their research was a framework for restoring the contrast of photographs captured from a moving vehicle. Nam and Nam first computed the weather conditions and then inferred the scene structure, which was refined during the restoration process [22]. The proposed visibility enhancement algorithm was not designed for road photographs [23]. Hautière et al. proposed techniques in which the fog region is segmented using calculations from the direction charts; they also computed fog density and used a contrast restoration method that assumes a flat road in order to detect vertical objects [24]. Abbaspour et al. used a technique based on a single in-vehicle camera that performs better in terms of accuracy and speed when detecting fog density from images [25]. Hautière et al. also proposed image improvement techniques appropriate for daytime fog conditions based on differential geometry, with a partition of unity as the foundation of the method.
This model was the most suitable for contrast restoration under foggy conditions [26]. Other researchers suggested an algorithm for reducing the turbidity of an image: they assumed that an image has a reference intensity level and a characteristic intensity level, used a low-pass filter to obtain the reference intensity level, and computed the characteristic intensity level as the original intensity level minus the reference intensity level [27] (Fig. 9).
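The CIE definition quoted earlier connects directly to Koschmieder's law: apparent contrast decays as C(d) = C0 · exp(−k·d) for atmospheric extinction coefficient k, so the meteorological visibility distance is the distance d at which C/C0 falls to the 5% threshold, i.e. V = ln(20)/k ≈ 3/k. A quick numeric sketch (illustrative; the example coefficient is an assumption, not a value from the cited papers):

```python
import math

def meteorological_visibility(k):
    """CIE meteorological visibility distance for extinction coefficient k (1/m).

    Koschmieder's law: contrast decays as exp(-k * d); visibility is the
    distance at which relative contrast drops to the 5% threshold.
    """
    return math.log(1 / 0.05) / k   # = ln(20)/k, roughly 3/k

# A hypothetical moderate fog with k = 0.03 per meter gives a visibility
# distance of about 100 m.
fog_visibility = meteorological_visibility(0.03)
```

Fog-density estimators in the surveyed work effectively invert this relation: estimate k from image contrast, then report V.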
Negru et al. conducted an experiment on image contrast, providing quantifiable evidence that road safety is increased by advanced driver assistance systems (ADAS). Then, using a modified Piéron's law as a foundation, a quantitative model was developed for target visibility (Vt), calculated from onboard camera images [28]. Other researchers presented a pixel-based technique to remove haze artifacts from images by using a single-image dehazing framework. Halmaoui et al. recommended a haze density analysis to determine the level of atmospheric light; a transmission map can then be estimated and refined by using a bilateral filter [29] (Fig. 10).
For hazy photographs and videos, Kim et al. and Su et al. proposed a fast and optimal dehazing method that combines a contrast term and an information loss term into a cost function; minimizing this cost function improves contrast while preserving information. The method extends a static-image-dehazing algorithm to real-time video dehazing [31, 32]. Sharma and Arya provided a technique for recovering image data from a single blurry image. The dark-channel-prior (DCP) algorithm tends to underestimate the transmission of bright regions; because haze imposes the same offset on the intensity in the dark channel of a hazy image, the predicted value of that channel was used as an estimate of this offset to adjust the transmission [33].
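The dark-channel-prior step discussed above can be sketched in a few lines of NumPy. This is a simplified illustration of the standard DCP transmission estimate t(x) = 1 − ω · dark_channel(I/A), with a small assumed patch size and scalar atmospheric light; it is not the corrected bright-region method of [33].

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel: per-pixel minimum over color channels, then a local min filter."""
    mins = img.min(axis=2)
    h, w = mins.shape
    p = patch // 2
    padded = np.pad(mins, p, mode="edge")
    out = np.empty_like(mins)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def estimate_transmission(img, atmosphere, omega=0.95, patch=3):
    """Standard DCP transmission estimate: t = 1 - omega * dark_channel(I / A)."""
    normed = img / atmosphere
    return 1.0 - omega * dark_channel(normed, patch)
```

A haze-free, colorful scene has a near-zero dark channel and thus near-unity transmission, while a uniformly gray (hazy-looking) scene yields a low transmission, which is exactly the cue dehazing methods exploit; it is also why bright white objects, whose dark channel is not near zero, get their transmission underestimated.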
For the real-time removal of haze from high-definition videos, a GPU-accelerated parallel computing solution was proposed [34]. For real-time processing, Rai et al. proposed a hardware-implemented single-image haze reduction approach that uses computationally efficient image-processing techniques [35]. Another study investigated the aureole patterns generated by scattering and nonuniformly dispersed lighting in low-light foggy conditions: to reduce the influence of multiple scattering, the image was subdivided into a halo layer and a scene layer, and the spatially varying ambient light was then calculated following Retinex theory [36]. In the transmission map obtained with the dark-channel-prior approach, another group employed mean-shift segmentation to separate the sky areas from the foreground and then used guided image filtering to smooth the map, separately increasing the brightness of each sky region in the transmission map [37]. Yuan et al. used the dark channel prior to mask the sky regions, focusing on road-edge recognition, with dehazing similarly focused, together resulting in improved visibility under foggy conditions [38].
Mou et al. proposed an algorithm consisting of sky segmentation and area-wise medium-transmission-based image dehazing [39]. Another approach first detects the sky, divides the image into sky and nonsky regions, independently estimates the transmissions of the two sections, and then applies a refining step [40]. An efficient strategy for improving fog-degraded traffic images divides the image into blocks and chooses the block with the least sparsity to compute the local transfer function [41]. To simulate the mathematical model of fog, Hu et al. used a deep neural network [42].
The edge-sharpening cycle-consistent adversarial network (ESCCGAN) was proposed as a generative adversarial network [43]. To overcome the performance limitations of methods based on the atmospheric scattering model, a residual-based dehazing network was developed [44]. A generative adversarial network (GAN) dehazing method has been used to dehaze image areas; this technique accounts for the varying degrees of haze concentration that must be adjusted while preserving the original image's details [45]. Another suggested algorithm employs a supervised machine-learning technique to approximate the transmission medium's pixel-wise extinction factors and uses a unique compensation scheme to correct the erroneous amplification of white objects after dehazing.
Feng et al. used an edge-preserving maximum reflectance prior (MRP) method to reduce the color cast of hazy photographs taken at night; the transmission map was then obtained by feeding the color-corrected hazy image into a self-encoder network [46]. Across the reviewed studies, the following parameters were considered: contrast, intensity, image noise, image resolution, illumination and changes in illumination, heavy occlusion, visibility distance, classification accuracy, precision, speed, aspect ratio, compactness, the accuracy of a crossing vehicle, hue, light angle, size variety, relative width, length and area, Histogram of Oriented Gradients (HOG) features, vehicle class, computational complexity, processing speed, reliability, standard deviation, entropy, saturation value, chroma, visibility threshold, attenuation, meteorological visibility distance, object distance, camera response, pitch-angle sensitivity, inflection point, processing time, reaction time, and radiance. Our review is carried out on the basis of the effect these parameters have on the input image under consideration.
Table 2 highlights the implemented techniques and the key points extracted from all the referenced research papers (Fig. 11).
4 Conclusion
According to the key findings of all the research papers referenced in this survey, a flexible fog-estimation method is required. No researcher has used or proposed a hybrid algorithm to detect the edges of moving or dynamic objects. In addition, none of the reviewed work focuses on identifying moving vehicles in foggy weather, which leaves a huge gap to fill. Researchers have proposed many techniques and methods to identify objects but have not focused much on the identification of moving objects, and the few who considered moving object identification did not include the impact of the environment, especially foggy weather, on the visibility and identification of moving vehicles. Therefore, an algorithm that can efficiently identify moving objects in fog needs to be developed.
References
Pavlic, M., Belzner, H., Rigoll, G., & Ili, S. (2011). Image based fog detection in vehicles. IEEE.
Pavlic, M., Belzner, H., Rigoll, G., & Ili, S. 2012. Image based fog detection in vehicles. In Intelligent Vehicles Symposium Alcalá de Henares, SCI indexed.
Dong, Z., Wu, Y., Pei, M., & Jia, Y. (2015). Vehicle type classification using a semisupervised convolutional neural network. IEEE Transactions on Intelligent Transportation Systems, SCI Indexed, 16(4), 2247–2256.
Fu, H., Ma, H., Liu, Y., & Lu, D. (2016). A vehicle classification system based on hierarchical multi-SVMs in crowded traffic scenes. Neurocomputing, SCI indexed, 211, 182–190.
Singh, R., Singh, S., & Kaur, N. (2016). A review: Techniques of vehicle detection in fog. Indian Journal of Science and Technology, Zoological Record, 9(45). https://doi.org/10.17485/ijst/2016/v9i45/106793
Zhuo, L., Jiang, L., Zhu, Z., Li, J., Zhang, J., & Long, H. (2017). Vehicle classification for large-scale traffic surveillance videos using convolutional neural networks. Machine Vision and Applications, SCI, 28(7), 793–802.
Murugan, V., & Kumar, V. R. (2018). Automatic moving vehicle detection and classification based on artificial neural fuzzy inference system. Wireless Personal Communications, SCI, Springer, 100(3), 745–766.
Chowdhury, P. N., & Ray, T. C. (2018). A vehicle detection technique for traffic management using image processing. In International Conference on Computer, Communication, Chemical, Material and Electronic Engineering (IC4ME2), SCI.
Liu, W., Luo, Z., & Li, S. (2018). Improving deep ensemble vehicle classification by using selected adversarial samples. Knowledge-Based Systems, SCI, 160, 167–175.
Wang, X., Zhang, W., Wu, X., Xiao, L., Qian, Y., & Fang, Z. (2019). Real-time vehicle type classification with deep convolutional neural networks. Journal of Real-Time Image Processing, SCI, 16(1), 5–14.
Jyothi, R. A., Babu, R. K., & Bachu, S. (2019). Moving object detection using the genetic algorithm for real times transportation. International Journal of Engineering and Advanced Technology (IJEAT), 8(6).
Chandrika, R. R., Ganesh, G. N. S., & Raghunath, K. M. K. (2020). Vehicle detection and classification using image processing. IEEE Xplore, SCI.
Kalyan, S. S., Pratyusha, V., Nishitha, N., & Ramesh, T. K. (2020). Vehicle detection using image processing. In IEEE International Conference for Innovation in Technology, SCI.
Shyamala, A. (2020). Certain investigations on moving vehicle detection and classification using soft computing techniques. shodhganga.
Şentaş, A., Tashiev, İ., & Küçükayvaz, F. (2020). Performance evaluation of support vector machine and convolutional neural network algorithms in real-time vehicle type and color classification. Evolutionary Intelligence, SCI, 13(1), 83–91.
Hedeya, M. A., Eid, A. H., & Abdel-Kadar, R. F. (2020). A super learner ensemble of deep networks for vehicle-type classification. IEEE Access, SCI, 8, 98266–98280.
Zahra, G., Imran, M., Qahtani, A. M., Alsufyani, A., Almutiry, O., Mahmood, A., & Alazemi, F. E. (2021). Visibility enhancement of scene images degraded by foggy weather condition: An application to video surveillance. Computers, Materials & Continua Tech Science Press, SCI. https://doi.org/10.32604/cmc.2021.017454
Jagannathan, P., Kumar, S. R., Frnda, J., Divakarachari, P. V., & Subramani, P. (2021). Moving vehicle detection and classification using Gaussian mixture model and ensemble deep learning technique. Hindawi Wireless Communications and Mobile Computing, SCI. https://doi.org/10.1155/2021/5590894
Miclea, R. C., Ungureanu, V. I., Sandru, F. D., & Silea, I. (2021). Visibility enhancement and fog detection: Solutions presented in recent scientific papers with potential for application to Mobile systems. Sensors, SCI, 21, 3370. https://doi.org/10.3390/s21103370
Kim, K., Kim, S., & Kim, K. S. (2018). Effective image enhancement techniques for fog-affected indoor and outdoor images. IET Image Processing Research Article, SCI. https://doi.org/10.1049/iet-ipr.2016.0819
Jiang, Y., Sun, C., Zhao, Y., & Yang, L. (2017). Fog density estimation and image defogging based on surrogate modeling for optical depth. IEEE Transactions on Image Processing, SCI. https://doi.org/10.1109/TIP.2017.2700720
Nam, Y., & Nam, Y. C. (2018). Vehicle classification based on images from visible light and thermal cameras. Journal on Image and Video Processing, SCI. https://doi.org/10.1186/s13640-018-0245-2
Pesek, J., & Fiser, O. (2013). Automatically low clouds or fog detection, based on two visibility meters and FSO. In 13th Conference on Microwave Techniques COMITE.
Hautière, N., Tarel, J. P., & D. Aubert (2007). Towards fog-free in-vehicle vision systems through contrast restoration. In Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 0–7. https://doi.org/10.1109/CVPR.2007.383259
Abbaspour, M. J., Yazdi, M., & Masnadi-Shirazi, M. (2016). A new fast method for foggy image enhancement. In 2016 24th Iranian Conference on Electrical Engineering (ICEE) 2016, pp. 1855–1859. https://doi.org/10.1109/IranianCEE.2016.7585823
Hautière, N., Tarel, J. P., Halmaoui, H., Brémond, R., & Aubert, D. (2014). Enhanced fog detection and free-space segmentation for car navigation. Machine Vision and Applications, 25(3), 667–679. https://doi.org/10.1007/s00138-011-0383-3
Negru, M., & Nedevschi, S. (2013). Image based fog detection and visibility estimation for driving assistance systems. In Proceedings, 2013 IEEE 9th International Conference on Intelligent Computer Communication and Processing 2013, pp. 163–168, https://doi.org/10.1109/ICCP.2013.6646102
Negru, M., Nedevschi, S., & Peter, R. I. (2015). Exponential contrast restoration in fog conditions for driving assistance. IEEE Transactions on Intelligent Transportation Systems, 16(4), 2257–2268. https://doi.org/10.1109/TITS.2015.2405013
Halmaoui, H., Joulan, K., Hautière, N., Cord, A., & Brémond, R. (2015). Quantitative model of the driver’s reaction time during daytime fog-application to a head up display-based advanced driver assistance system. IET Intelligent Transport Systems, 9(4), 375–381. https://doi.org/10.1049/iet-its.2014.0101
Yuan, H., Liu, C., Guo, Z., & Sun, Z. (2017). A region-wised medium transmission based image dehazing method. IEEE Access, 5(c), 1735–1742. https://doi.org/10.1109/ACCESS.2017.2660302
Anandkumar, R., Dinesh, K., Obaid, A. J., Malik, P., Sharma, R., Dumka, A., Singh, R., & Khatak, S. (2022). Securing e-Health application of cloud computing using hyperchaotic image encryption framework. Computers & Electrical Engineering, 100, 107860. https://doi.org/10.1016/j.compeleceng.2022.107860
Sharma, R., Xin, Q., Siarry, P., & Hong, W.-C. (2022). Guest editorial: Deep learning-based intelligent communication systems: Using big data analytics. IET Communications. https://doi.org/10.1049/cmu2.12374
Sharma, R., & Arya, R. (2022). UAV based long range environment monitoring system with Industry 5.0 perspectives for smart city infrastructure. Computers & Industrial Engineering, 168, 108066. https://doi.org/10.1016/j.cie.2022.108066
Rai, M., Maity, T., Sharma, R., et al. (2022). Early detection of foot ulceration in type II diabetic patient using registration method in infrared images and descriptive comparison with deep learning methods. The Journal of Supercomputing. https://doi.org/10.1007/s11227-022-04380-z
Sharma, R., Gupta, D., Maseleno, A., & Peng, S.-L. (2022). Introduction to the special issue on big data analytics with internet of things-oriented infrastructures for future smart cities. Expert Systems, 39, e12969. https://doi.org/10.1111/exsy.12969
Sharma, R., Gavalas, D., & Peng, S.-L. (2022). Smart and future applications of Internet of Multimedia Things (IoMT) using big data analytics. Sensors, 22, 4146. https://doi.org/10.3390/s22114146
Sharma, R., & Arya, R. (2022). Security threats and measures in the internet of things for smart city infrastructure: A state of art. Transactions on Emerging Telecommunications Technologies, e4571. https://doi.org/10.1002/ett.4571
Zheng, J., Wu, Z., Sharma, R., & Lv, H. (2022). Adaptive decision model of product team organization pattern for extracting new energy from agricultural waste. Sustainable Energy Technologies and Assessments, 53(Part A), 102352. https://doi.org/10.1016/j.seta.2022.102352
Mou, J., Gao, K., Duan, P., Li, J., Garg, A., & Sharma, R. (2022). A machine learning approach for energy-efficient intelligent transportation scheduling problem in a real-world dynamic circumstances. IEEE Transactions on Intelligent Transportation Systems. https://doi.org/10.1109/TITS.2022.3183215
Priyadarshini, I., Sharma, R., Bhatt, D., et al. (2022). Human activity recognition in cyber-physical systems using optimized machine learning techniques. Cluster Computing. https://doi.org/10.1007/s10586-022-03662-8
Hussain, F., & Jeong, J. (2016). Visibility enhancement of scene images degraded by foggy weather conditions with deep neural networks. Journal of Sensors, 2016. https://doi.org/10.1155/2016/3894832
Hu, A., Xie, Z., Xu, Y., Xie, M., Wu, L., & Qiu, Q. (2020). Unsupervised haze removal for high-resolution optical remote-sensing images based on improved generative adversarial networks. Remote Sensing, 12(24), 1–20. https://doi.org/10.3390/rs12244162
Ha, E., Shin, J., & Paik, J. (2020). Gated dehazing network via least square adversarial learning. Sensors (Switzerland), 20(21), 1–15. https://doi.org/10.3390/s20216311
Chen, J., Wu, C., Chen, H., & Cheng, P. (2020). Unsupervised dark-channel attention-guided cyclegan for single-image dehazing. Sensors (Switzerland), 20(21), 1–15. https://doi.org/10.3390/s20216000
Ngo, D., Lee, S., Lee, G. D., & Kang, B. (2020). Single-image visibility restoration: A machine learning approach and its 4K-capable hardware accelerator. Sensors (Switzerland), 20(20), 1–27. https://doi.org/10.3390/s20205795
Feng, M., Yu, T., Jing, M., & Yang, G. (2020). Learning a convolutional autoencoder for nighttime image dehazing. Information, 11(9), 1–13. https://doi.org/10.3390/info11090424
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Kaur, N., Sharma, K., Jain, A. (2023). Techniques to Identify Image Objects Under Adverse Environmental Conditions: A Systematic Literature Review. In: Sharma, R., Jeon, G., Zhang, Y. (eds) Data Analytics for Internet of Things Infrastructure. Internet of Things. Springer, Cham. https://doi.org/10.1007/978-3-031-33808-3_11
Print ISBN: 978-3-031-33807-6
Online ISBN: 978-3-031-33808-3