Abstract
The speedy and reliable classification of plant diseases and pests is essential to preventing productivity loss and reduced quality of agricultural commodities, and machine learning offers a route to such a solution. Deep learning has achieved significant advances in image processing in recent years, greatly outperforming previous approaches, and researchers are very interested in applying it to plant disease and pest detection; it has offered many innovative precision-farming applications in recent decades. In this investigation, deep learning models are adapted to the task at hand using transfer learning and deep feature extraction approaches. Pre-trained deep models are used for feature extraction and fine-tuning, and RCNN (Region-based Convolutional Neural Network) and YOLO (You Only Look Once) are used to classify the extracted features. An improvised YOLO is used, which achieves pest prediction accuracy of about 95%. The performance of current research is compared, and common datasets are introduced. This paper also examines potential obstacles in real-world applications of deep learning-based plant disease and pest detection. The experiments use data from genuine disease and pest images; for performance evaluation, accuracy is computed and compared.
1 Introduction
Human civilization now has the potential to generate enough food to feed more than 7 billion people thanks to modern innovations. Nevertheless, a number of factors such as climate change [1], pollinator decline, and plant diseases [2] continue to pose a threat to food security. Pests and pathogens are a worldwide danger to food security, but they can also be devastating for smallholder farmers whose livelihoods depend on healthy crops. Small-scale farmers contribute more than 80% of agricultural production in developing countries, and reports of yield losses of no less than 60% due to pests and infections are widespread [3].
In many circumstances, workers carry out passive monitoring as they go about their daily tasks. The downside of this approach is that by the time an infestation is recognized, a substantial proportion of the harm has already been done. On big farms, early pest detection requires a more organized methodology. Traps are, without a doubt, the most commonly used technique for pest monitoring [4, 5]. The great majority of research in the literature is concerned with the second stage; the first stage is generally handled only briefly, with an explanation of how the data was obtained, and the third stage is largely outside the purview of research.
2 Literature Review
Gutierrez et al. [6], who conducted a comparative analysis using a combination of pre-trained deep learning models together with machine learning and computer vision techniques, stimulated the current study. The main goal of the study in [6] is to improve pest identification accuracy by using current frameworks such as TensorFlow and Keras to construct a deep convolutional neural network (CNN). In addition, several recent pre-trained models may be applied to the dataset to assess accuracy. Table 1 depicts the overall survey of pest management, the algorithms used, and their accuracy.
2.1 Pest Detection Methods
The goal of detection methods is to separate a specific target pest from the rest of the scene in a picture. This corresponds to a binary classification with the classes “target present” and “target absent.” K-means clustering is a vector quantization approach for grouping a set of observations into k clusters or k classes. The image is first divided into 100 × 100 blocks by the algorithm. The RGB and L*a*b* color spaces are then used as the foundation for an algorithm that pre-selects probable cluster centers before using K-means clustering to categorize each pixel. Using ellipse eccentricity rules, erroneous objects are removed.
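As a minimal sketch (not the cited authors' implementation), the K-means step described above can be written as follows; the block contents, colour values, and deterministic initialisation are illustrative:

```python
import numpy as np

def kmeans_pixels(pixels, k=2, iters=20):
    """Plain K-means over pixel colour vectors (shape: n_pixels x 3)."""
    # Deterministic initialisation: spread the initial centres across the data.
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[idx].astype(float).copy()
    for _ in range(iters):
        # Assign every pixel to its nearest centre.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Toy "block": 50 dark background pixels and 50 bright pest-coloured pixels.
block = np.vstack([np.full((50, 3), 30.0), np.full((50, 3), 220.0)])
labels, centers = kmeans_pixels(block, k=2)
```

In the cited pipeline this grouping would run per 100 × 100 block, with candidate cluster centres pre-selected from the RGB and L*a*b* colour spaces.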
2.2 Pest Classification Methods
Classifying pests is difficult because such a classifier must not only distinguish among the targeted species but also deal with nontargeted species, which might be numerous. The smallest distance between the retrieved attribute vector and the reference vectors associated with each class was used to classify each item as a whitefly, aphid, or thrip. Xia [16] utilized the watershed method to partition the insects, then used the Mahalanobis distance with color characteristics extracted from the YCrCb colour space for the classification of eight pest species. Dawei [8] used transfer learning to classify 10 species in pictures collected in the field using a pre-trained AlexNet CNN. Metwalli [19] presented the DenseFood model, a densely connected CNN with several convolutional layers. The phrase “You Only Look Once” is abbreviated as YOLO.
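The nearest-reference-vector classification described above can be sketched as follows; the class reference vectors, the shared covariance, and the test feature are all hypothetical values chosen for illustration:

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance between a feature vector and a class mean."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Illustrative colour-feature reference vectors for three pest classes.
refs = {
    "whitefly": np.array([200.0, 190.0, 180.0]),
    "aphid":    np.array([80.0, 120.0, 60.0]),
    "thrip":    np.array([60.0, 50.0, 40.0]),
}
# Shared covariance estimated from training features (isotropic here for
# brevity, which makes the ranking equivalent to Euclidean distance).
cov_inv = np.linalg.inv(np.eye(3) * 25.0)

def classify(feature):
    """Pick the class whose reference vector is nearest to the feature."""
    return min(refs, key=lambda c: mahalanobis(feature, refs[c], cov_inv))

label = classify(np.array([85.0, 118.0, 62.0]))
```

A full covariance matrix estimated from training data would let the distance account for correlated colour channels, which is the usual reason for preferring Mahalanobis over Euclidean distance.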
To identify objects, the technique takes just a single forward propagation through a neural network, as the name indicates: a single run of the algorithm produces predictions for the whole picture. The CNN predicts multiple bounding boxes and class probabilities at the same time. There are several variations of the YOLO algorithm; Tiny YOLO and YOLOv3 are two popular examples.
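Because many bounding boxes are predicted at once, YOLO-style detectors typically filter them with confidence thresholding and non-maximum suppression, both of which rest on intersection-over-union (IoU) between boxes. A minimal sketch of IoU, with corner-format boxes chosen for illustration:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two overlapping candidate boxes for the same pest: half-width overlap.
overlap = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

Non-maximum suppression keeps the highest-confidence box and discards any other box whose IoU with it exceeds a threshold.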
3 YOLO V3 Architecture
YOLO is very popular because of its speed and accuracy. YOLOv3's network design is made up of three distinct networks. The first is Darknet-53, which serves as the network's backbone. The detection layers, also known as YOLO-layers, come next, followed by an upsampling network. Figure 1 depicts the network structure. The backbone network, Darknet-53, is used to extract features from the input picture. Its basic components are residual blocks and 53 convolutional layers. A residual block is made up of a 1 × 1 and a 3 × 3 convolutional layer linked together via a shortcut connection. Figure 1 also depicts the Darknet-53 architecture.
Figure 1 shows the overview of YOLOv3 structure. The numbers below each layer show the dimension decrease of the input at that layer. The gray layer is the input layer. The blue layers are part of the backbone network, Darknet-53. The red layers are upsampling layers and the yellow layers are YOLO-layers.
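The dimension decrease shown below each layer in Figure 1 comes from Darknet-53's stride-2 convolutions: each of its five downsampling stages halves the spatial size, and YOLOv3 attaches its detection layers at the last three scales. A small sketch of that arithmetic (the 416 × 416 input size is the common YOLOv3 configuration, used here for illustration):

```python
def darknet53_scales(input_size=416):
    """Trace the spatial size after each stride-2 stage in Darknet-53."""
    sizes = [input_size]
    for _ in range(5):          # five downsampling (stride-2) stages
        sizes.append(sizes[-1] // 2)
    return sizes

sizes = darknet53_scales(416)   # [416, 208, 104, 52, 26, 13]
# YOLOv3 places its three detection (YOLO) layers at the last three scales,
# reaching the two larger ones through the upsampling network.
detection_scales = sizes[-3:]
```

The coarsest grid (13 × 13 here) detects large objects, while the upsampled 26 × 26 and 52 × 52 grids recover smaller pests.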
4 Improvised YOLO V3 Architecture
The model divides the image into an S × S grid; for each grid cell it predicts B bounding boxes, a confidence score (C) for each box, and class probabilities (CP). The predictions are encoded as an S × S × (B × 5 + CP) tensor, where each box contributes four coordinates plus one confidence score.
Dataset: Identifying a species from a photograph is a difficult task. The categorization of a picture is based on the assumption that the image contains just one species. However, in general, we want to identify all of the species in a photograph. Thankfully, biologists and taxonomists have created a taxonomic hierarchy to classify and organize species. Insects, spiders, crustaceans, centipedes, millipedes, and other arthropods are included in the ArTaxOr dataset. Figure 2 depicts the overall working of the improvised YOLO.
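The grid encoding above fixes the output tensor shape; a one-line sketch (the S = 7, B = 2, 20-class configuration is the classic YOLO example, used here only to illustrate the arithmetic):

```python
def yolo_output_shape(S, B, num_classes):
    """Shape of the YOLO prediction tensor: an S x S grid of cells, each
    predicting B boxes (4 coordinates + 1 confidence) plus one
    class-probability vector of length num_classes."""
    return (S, S, B * 5 + num_classes)

# Classic example: a 7 x 7 grid, 2 boxes per cell, 20 classes.
shape = yolo_output_shape(7, 2, 20)
```

For instance, a 13 × 13 grid with 3 boxes per cell over the 8 ArTaxOr-style classes would give a 13 × 13 × 23 tensor under this encoding.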
The dataset consists of JPEG images of arthropods from the following orders:
- Araneae (spiders): adults and juveniles
- Coleoptera (beetles): adults
- Diptera (true flies, including mosquitoes, midges, crane flies, etc.): adults
- Hemiptera (true bugs, including aphids, cicadas, planthoppers, shield bugs, etc.): adults and nymphs
- Hymenoptera (ants, bees, wasps): adults
- Lepidoptera (butterflies, moths): adults
- Odonata (dragonflies, damselflies): adults
- Orthoptera (grasshoppers, locusts, crickets, etc.)
5 Results and Discussion
Figure 3 shows the pest predicted in the picture, with the accuracy of the prediction marked in the image. Ground-truth and predicted images are presented side by side to compare prediction accuracy, and cases where the model fails to predict the class are also shown in the figure. Table 2 shows the accuracy comparison for each class.
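Per-class accuracies such as those in Table 2 are obtained by comparing ground-truth and predicted labels class by class; a sketch with hypothetical labels (the class names mirror the ArTaxOr orders, but the values are invented):

```python
from collections import defaultdict

def per_class_accuracy(truth, pred):
    """Fraction of correctly predicted samples for each ground-truth class."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(truth, pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return {c: correct[c] / total[c] for c in total}

# Hypothetical ground-truth and predicted labels for two arthropod orders.
truth = ["Araneae", "Araneae", "Coleoptera", "Coleoptera"]
pred  = ["Araneae", "Coleoptera", "Coleoptera", "Coleoptera"]
acc = per_class_accuracy(truth, pred)
```

Failure cases show up directly as the misclassified samples that pull a class's fraction below 1.0.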
6 Conclusion
It is difficult to automate pest monitoring. Machine learning algorithms have evolved to the point where the tools needed to develop a precise system with real-world application are now readily obtainable. Gathering data that reflects the enormous variety observed in practice remains difficult, but as such data becomes more common and collection procedures grow more refined, this may become less of an issue in the near future. However, as discussed throughout this paper, there are still numerous research gaps to be filled, implying that pest-monitoring automation will remain a fascinating study topic for many years. The proposed YOLO v3 architecture shows around 95% accuracy across different pest predictions; comparatively, YOLO v3 works better and provides better results than RCNN. Adding more images for training will help reduce the failure cases; where fewer images are available, we recommend image augmentation for better accuracy.
References
Tai AP, Martin MV, Heald CL (2014) Threat to future global food security from climate change and ozone air pollution. Nat Clim Chang 4:817–821. https://doi.org/10.1038/nclimate2317
Strange RN, Scott PR (2005) Plant disease: a threat to global food security. Phytopathology 43:83–116. https://doi.org/10.1146/annurev.phyto.43.113004.133839
Harvey CA, Rakotobe ZL, Rao NS, Dave R, Razafimahatratra H, Rabarijohn RH et al (2014) Extreme vulnerability of smallholder farmers to agricultural risks and climate change in Madagascar. Philos Trans R Soc Lond B Biol Sci 369:20130089. https://doi.org/10.1098/rstb.2013.008
Barbedo JGA, Castro GB (2019) Influence of image quality on the identification of psyllids using convolutional neural networks. Biosyst Eng 182:151–158
Yen AL, Madge DG, Berry NA, Yen JDL (2013) Evaluating the effectiveness of five sampling methods for detection of the tomato potato psyllid, Bactericera cockerelli (Sulc) (Hemiptera: Psylloidea: Triozidae). Aust J Entomol 52:168–174
Gutierrez A, Ansuategi A, Susperregi L, Tubío C, Rankić I, Lenža L (2019) A benchmarking of learning strategies for pest detection and identification on tomato plants for autonomous scouting robots using internal databases. J Sens
Espinoza K, Valera DL, Torres JA, López A, Molina-Aiz FD (2016) Combination of image processing and artificial neural networks as a novel approach for the identification of Bemisia tabaci and Frankliniella occidentalis on sticky traps in greenhouse agriculture. Comput Electron Agric 127:495–505
Dawei W, Limiao D, Jiangong N, Jiyue G, Hongfei Z, Zhongzhi H (2019) Recognition pest by image-based transfer learning. J Sci Food Agric 99:4524–4531
Deng L, Wang Y, Han Z, Yu R (2018) Research on insect pest image detection and recognition based on bio-inspired methods. Biosyst Eng 169:139–148
Dimililer K, Zarrouk S (2017) ICSPI: intelligent classification system of pest insects based on image processing and neural arbitration. Appl Eng Agric 33:453–460
Liu L, Wang R, Xie C, Yang P, Wang F, Sudirman S, Liu W (2019) PestNet: an end-to-end deep learning approach for large-scale multi-class pest detection and classification. IEEE Access 7:45301–45312
Wang F, Wang R, Xie C, Yang P, Liu L (2020) Fusing multi-scale context-aware information representation for automatic in-field pest detection and recognition. Comput Electron Agric 169:105222
Wang Z, Wang K, Liu Z, Wang X, Pan S (2018) A cognitive vision method for insect pest image segmentation. IFAC-PapersOnLine 51:85–89
Batool A, Hyder SB, Rahim A, Waheed N, Asghar MA et al (2020) Classification and identification of tomato leaf disease using deep neural network. 2020 international conference on engineering and emerging technologies (ICEET), IEEE, pp 1–6
Yao Q, Lv J, Liu QJ, Diao GQ, Yang BJ, Chen HM, Tang J (2012) An insect imaging system to automate rice light-trap pest identification. J Integr Agric 11:978–985
Xia D, Chen P, Wang B, Zhang J, Xie C (2018) Insect detection and classification based on an improved convolutional neural network. Sensors 18:4169
Ebrahimi M, Khoshtaghaza M, Minaei S, Jamshidi B (2017) Vision-based pest detection based on SVM classification method. Comput Electron Agric 137:52–58
Deng L, Wang Y, Han Z, Yu R (2018) Research on insect pest image detection and recognition based on bio-inspired methods. BioSyst Eng (Elsevier) 169:139–148
Metwalli, Shen W, Wu CQ (2020) Food image recognition based on densely connected convolutional neural networks. 2020 international conference on artificial intelligence in information and communication (ICAIIC), pp 027–032
Kumar S, Yadav D, Gupta H, Verma OP, Ansari IA, Ahn CW (2021) A novel YOLOv3 algorithm-based deep learning approach for waste segregation: towards smart waste management. Electronics 10:14. https://doi.org/10.3390/electronics10010014
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Sujaritha, M., Kavitha, M., Roobini, S. (2023). Pest Detection Using Improvised YOLO Architecture. In: Kannan, R.J., Thampi, S.M., Wang, SH. (eds) Computer Vision and Machine Intelligence Paradigms for SDGs. Lecture Notes in Electrical Engineering, vol 967. Springer, Singapore. https://doi.org/10.1007/978-981-19-7169-3_6
Print ISBN: 978-981-19-7168-6
Online ISBN: 978-981-19-7169-3