Abstract
A review of state-of-the-art classification and recognition techniques, together with recent advances in machine vision, shows that applying convolutional neural networks (CNNs) to waste management remains a largely unexplored area of research. A proof-of-concept neural network was built to demonstrate the idea: a low-cost method for identifying and classifying recyclables that can increase sorting efficiency, lower the human workload, and help the industry understand how neural networks may change waste management. Using only color images of input waste, the system classified objects by material type (paper, glass, cardboard, metal, and plastic) with an accuracy of up to 90%. The potential implementation of the recycling algorithm was assessed in terms of economic, social, commercial, and environmental performance, under the concept of integrated and sustainable waste management. When CNN-based systems are compared with existing waste management technologies, they show the potential to reshape extended producer responsibility programs. The economy underpins all recycling.
Keywords
- Garbage detection
- Machine learning
- Image processing
- Waste
- Waste sorting
- Analyzing waste
- Segregation of waste
- CNN
1 Introduction
Garbage management is a major problem in India that impacts a huge number of people in a variety of ways. With over 1.5 lakh metric tonnes (MT) of solid waste produced every day, India is buried under a mountain of garbage. Collected garbage accounts for almost 90% of that total (135,000 MT each day). Almost 15,000 MT of rubbish is left exposed every day, so roughly 55 lakh MT of solid waste is dumped in open locations each year, resulting in serious pollution. Only 20% of all garbage collected is processed (27,000 MT per day); the remaining 80% (108,000 MT per day) is sent to landfills [1].
Researchers have proposed litter reduction strategies in a number of ways. Despite this, the importance of actual behavioral intervention in litter reduction has received little research attention [2]. Based on a review of 50 studies, five major approaches (environmental design, prompts, cleaning up previous litter or beautification, environmental education, and environmental engagement) were found to influence a person's littering habits [3].
2 Related Work
The study looks at existing state-of-the-art identification and sorting approaches as well as recent advancements in machine vision technology. The research includes traditional recycling systems so that the machine vision techniques can be compared against them [4]. While the study's goal was to cover techniques applicable to a variety of waste streams, it acknowledged a tendency toward municipal solid waste owing to the greater number of research articles published in this sector. However, because municipal solid waste is by far the most complicated and diversified waste stream, any development that makes its categorization easier would necessarily apply to other waste streams as well, such as industrial trash, electronic waste, and construction and demolition debris.
2.1 Manual Sorting
Manual labor is more cost-effective than automated methods in developing nations. The aggregate recovery efficiency of manual sorting is 91%, considerably higher than that of any known automated approach, owing to humans' unique ability to recognize materials in a relatively short amount of time. In industrialized nations, manual sorting is usually limited to a final quality-control step; because of budgetary restrictions, human weariness, and exposure to dangerous chemicals, it is typically avoided where feasible [5].
2.2 Machine Vision System
In all machine vision systems, as illustrated in the figure, image acquisition is handled by sensors while image processing is handled by algorithms. For trash sorting, the system is extended with a device that removes the objects the vision system has identified and located [6].
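The three-stage structure described above (sensor, algorithm, actuator) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function and parameter names are chosen here for clarity.

```python
def machine_vision_pipeline(acquire, process, actuate, frame_count):
    """Wire together the three stages of a sorting machine vision system:
    `acquire` plays the role of the image sensor, `process` is the detection
    algorithm run on each frame, and `actuate` stands in for the device that
    removes each object the vision system finds."""
    removed = []
    for _ in range(frame_count):
        frame = acquire()                 # sensor: grab one frame
        for obj in process(frame):        # algorithm: locate objects in it
            removed.append(actuate(obj))  # actuator: pull the object off the line
    return removed
```

With stub stages (a canned frame source, a filter that treats positive values as objects, and an identity actuator), the pipeline simply accumulates everything the "algorithm" flags.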
2.3 Aerial Survey
Aerial surveys were conducted in Cambodia in October 2019 over Phnom Penh, Sihanoukville, and Siem Reap, using a DJI Phantom 4 Pro photography UAS with a 20 MP RGB (red, green, and blue) image sensor (Fig. 1). In Fig. 1b–d, plastic trash is seen drifting, trapped in vegetation, washed ashore on beaches, and gathered into plastic river carpets. Images were captured at a resolution of 4864 × 3648 pixels, with ISO settings ranging from 100 to 400 and shutter speed and aperture set to automatic. The image sensor's viewing angle was 0° (nadir) at a flying height of 6 m, with a vertical GPS hover accuracy of 0.5 m.
The flight height was determined after analyzing imagery from pre-flight testing at altitudes ranging from 3 to 60 m; it provided adequate wide-area coverage with sufficient object resolution (length 2.5 cm). A Topcon GR-5 global navigation satellite system receiver was used to identify the ground control points, which were used to improve image mosaicking and geo-referencing. The collected aerial photos were post-processed with Pix4Dmapper version 4.5.3; the project included automated point cloud densification, 3D mesh generation, digital surface modeling, orthomosaic generation, and digital terrain modeling. In certain pictures, a visual meter scale was used to supplement size estimations of the photographed settings. The photos were not subjected to any atmospheric correction.
2.4 Detection and Quantification of Plastic Litter Algorithms
True-color RGB photos from the aerial survey were split into tiles of 100 × 100 × 3 pixels and 50 × 50 × 3 pixels. The tile size was determined based on the findings of a prior study (Martin et al. 2018). The plastic litter detector (PLD-CNN) technique analyzes the 100 × 100 × 3 pixel tiles. It was trained to distinguish between (i) water, (ii) vegetation, (iii) litter-low, (iv) litter-high, (v) sand, and (vi) other tile targets (Fig. 2). Tiles with fewer than three litter objects were labeled litter-low, and those with three or more were labeled litter-high. After observing that litter objects in images ranged from almost none in rivers with natural surroundings to hundreds in plastic river carpets, we devised these parameters to ensure that the algorithm could count as many litter items as possible. For the plastic litter quantifier (PLQ-CNN) method, the 100 × 100 × 3 pixel tiles used by PLD-CNN were subdivided into 50 × 50 × 3 pixel tiles; the purpose of this subdivision was to enhance the algorithm and thereby decrease information loss. PLQ-CNN evaluated only tiles containing any amount of garbage, and it was also trained to recognize and count individual litter pieces inside those tiles. In addition to the six PLD-CNN categories, PLQ-CNN distinguished 18 classes, including cans, cartons, plastic bags, bottles, cups, canisters, polystyrene packaging, shoes, Styrofoam, strings, and fabrics (Fig. 2). Sweet wrappers, noodle packages, and chip bags were grouped with plastic bags, which were divided into large and small bags. The items were recognized using the international marine litter classification scheme developed by the US National Oceanic and Atmospheric Administration. Following the aerial survey, a visual assessment of these plastic objects was conducted as part of the cleanup procedure [7].
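The tiling and the litter-low/litter-high labeling rule described above can be sketched as follows. This is a simplified illustration using NumPy, assuming non-overlapping tiles with partial edge tiles discarded; the function names are ours, and the actual PLD-CNN classifier is not reproduced here.

```python
import numpy as np

def split_into_tiles(image, tile=100):
    """Split an H x W x 3 RGB array into non-overlapping tile x tile x 3
    tiles, discarding any partial tiles at the right and bottom edges."""
    h, w, _ = image.shape
    tiles = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            tiles.append(image[y:y + tile, x:x + tile, :])
    return tiles

def label_litter_density(object_count, threshold=3):
    """Apply the survey's rule: tiles with fewer than `threshold` litter
    objects are 'litter-low', the rest are 'litter-high'."""
    return "litter-low" if object_count < threshold else "litter-high"
```

The same `split_into_tiles` helper with `tile=50` would produce the finer subdivision that PLQ-CNN operates on.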
Laser-induced breakdown spectroscopy (LIBS), optical sensors, X-ray sensors, ultrasound and noise sensors, and electromagnetic sensors (EMS) are also used for detection.
3 Proposed Method
3.1 Object Detection
We used the TensorFlow API to build the algorithm, Jupyter Notebook to interact with it, and an Amazon Web Services Ubuntu server to train it. Using a TensorFlow pretrained model and the COCO dataset, the team laid the groundwork for the project's object detection component. As seen in the image above, Prototype 1's resulting technique boxed almost every item it identified inside an input image; each recognized object is surrounded by a box. The confidence indicator next to each identified object is a percentage showing how certain that detection was; for example, the algorithm is 76% confident that it recognized a kite in the image provided. A grading system also exists, albeit not a standardized one: the number of items spotted in each image in Prototype 1 was scored from one to four. This functionality paved the way for the eventual inclusion of image ranking based on litter severity in Prototype 2. For the initial prototype, the image's final output boxed the detected elements with the accuracy percentage and a rating (Fig. 3).
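The per-object confidence label and the one-to-four image score described above can be expressed as a small post-processing step over a detector's output. This is an illustrative sketch, not Prototype 1's code; the detection dictionary format and function name are assumptions made here for clarity.

```python
def format_detections(detections):
    """Render detections the way Prototype 1 overlays them: each object's
    class name with its confidence percentage, plus a 1-4 severity rating
    for the whole image based on how many items were found."""
    labels = ["%s: %d%%" % (d["label"], round(d["score"] * 100))
              for d in detections]
    # Clamp the detection count into the one-to-four rating range.
    rating = max(1, min(4, len(detections)))
    return labels, rating
```

For the kite example in the text, a single detection with score 0.76 would be rendered as "kite: 76%" with a rating of 1.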
3.2 Dataset
After hours of research, we realized that the algorithm's litter identification skills required a large dataset dedicated to litter. The dataset the team eventually built used pictures from Google Street View, and we created a script to automate the image extraction procedure so that hundreds of thousands of photographs could be collected automatically. Once acquired, the pictures had to be labeled before the training process could start; as a result, 10,000 pictures from Google Street View were used to build the dataset. Keep Earth Beautiful organized a photo-labeling contest for a group of students, who used the labeling software we provided. The students first classified the photographs into two categories, those with litter and those without, and then labeled all of the garbage in the pictures. Once the students had finished labeling, we double-checked the labels for accuracy. During the verification process, the team noticed that some of the images were incorrectly labeled, rendering them worthless. Approximately 3,000 images were considered appropriate for the dataset at the conclusion of the operation. Although not all of the photographs could be used, those 3,000 proved extremely useful, containing a total of 9,000 items of garbage.
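One mechanical part of the verification step described above can be automated: discarding annotations whose bounding boxes are malformed or fall outside the image. This is a hedged sketch of such a sanity check, assuming simple `(x1, y1, x2, y2)` pixel-coordinate boxes; it is not the team's actual verification tooling, which also involved human review.

```python
def verify_labels(annotations, img_w, img_h):
    """Keep only annotations whose bounding box lies fully inside the image
    and has positive area; out-of-bounds or degenerate boxes are discarded
    as unusable, mirroring the culling of incorrectly labeled images."""
    valid = []
    for x1, y1, x2, y2 in annotations:
        if 0 <= x1 < x2 <= img_w and 0 <= y1 < y2 <= img_h:
            valid.append((x1, y1, x2, y2))
    return valid
```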
3.3 Training
The goal of Prototype 2 was to finish the garbage recognition part of the algorithm. However, the team had to train on the labeled dataset before deploying the recognition. The researchers split the dataset, using 80% of the images for training and the remaining 20% for testing. After training the algorithm for a certain number of "steps," we used a checkpoint file to evaluate the program's litter detection accuracy. These checkpoint files informed the team when it was time to stop training and evaluate the algorithm's progress. If the team believed the algorithm had been overtrained, a previously stored checkpoint file was restored (Figs. 4, 5, and 6).
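The 80/20 split described above can be sketched as a deterministic shuffle-and-cut. This is a generic illustration of the technique, not the team's script; the seed and function name are assumptions made here so the example is reproducible.

```python
import random

def train_test_split(items, train_fraction=0.8, seed=42):
    """Shuffle the dataset deterministically, then cut it into a training
    portion (80% by default) and a held-out testing portion (the rest)."""
    rng = random.Random(seed)       # fixed seed so the split is repeatable
    shuffled = items[:]             # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]
```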
3.4 Litter Detection
After the algorithm had been trained, we put it to the test using a variety of images, including images from the collection that did not contain garbage, to guarantee that the software does not draw boxes when no litter is present in the image. After hours of testing and training, the crew was finally satisfied with the result: the algorithm's final output boxes only pieces of litter, with a 50% confidence level for the image as well as a rating (Fig. 7).
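The behavior tested above, boxes only for confident litter detections and no boxes at all for clean images, amounts to a confidence threshold applied to the detector's raw output. A minimal sketch, assuming the same detection dictionary shape used in the object detection discussion:

```python
def filter_detections(detections, min_confidence=0.5):
    """Drop boxes below the 50% confidence level. An image containing no
    litter yields no detections, and therefore comes back with no boxes."""
    return [d for d in detections if d["score"] >= min_confidence]
```

A clean image (empty detection list) passes through as an empty list, so nothing is boxed; low-confidence spurious detections are suppressed the same way.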
5 Conclusion and Future Scope
5.1 Conclusion
Most regions have a significant demand for cleanliness when it comes to trash detection. This innovative trash-detecting module, paired with iterative approaches, would assist the vast majority of establishments that currently rely on traditional cleaning processes. The results of this research can be used to establish priorities and ensure that all places are kept as clean as possible.
5.2 Future Scope
More study should be conducted, concentrating on other types of enterprises and sectors. In other words, further research in a range of empirical circumstances will be needed to strengthen the validity of the concept.
References
Qing L, Yao C (2003) Real-time concepts for embedded systems
Fa-deng G, Min-xian H (2010) Study on the detection of locomotive driver fatigue based on image. In: 2010 2nd international conference on computer engineering and technology. IEEE
Pattanashetty VB, Mane V, Iyer NC, Kore S (2022) Traffic rules violation detection system. In: Joshi A, Mahmud M, Ragel RG, Thakur NV (eds) Information and communication technology for competitive strategies (ICTCS 2020). Lecture notes in networks and systems, vol 191. Springer, Singapore. https://doi.org/10.1007/978-981-16-0739-4_8
Pattanashetty VB, Mane V, Hurkadli SS, Iyer NC, Kore S (2022) Lane detection for visual assistance of self-driving vehicles for structured and unstructured roads. In: Joshi A, Mahmud M, Ragel RG, Thakur NV (eds) Information and communication technology for competitive strategies (ICTCS 2020). Lecture notes in networks and systems, vol 191. Springer, Singapore. https://doi.org/10.1007/978-981-16-0739-4_27
Pattanashetty VB, Iyer NC, Viswanath HL, Kore S (2019) Inclusive device to connect people with the surrounding world. In: Fong S, Akashe S, Mahalle P (eds) Information and communication technology for competitive strategies. Lecture notes in networks and systems, vol 40. Springer, Singapore. https://doi.org/10.1007/978-981-13-0586-3_49
Pattanashetty VB, Iyer NC, Viswanath HL (2018) Intellectual conveyance structure for travellers. In: Saeed K, Chaki N, Pati B, Bakshi S, Mohapatra D (eds) Progress in advanced computing and intelligent engineering. Advances in intelligent systems and computing, vol 564. Springer, Singapore. https://doi.org/10.1007/978-981-10-6875-1_37
DevKit8000 Evaluation Kit manual
http://dlib.net/facelandmarkdetectionex.cpp.htm
Pattanashetty VB, Iyer NC, Ibal P, Bagewadi A (2015) Smart driving assistance. IEEE Int Transp Electrification Conf (ITEC) 2015:1–4. https://doi.org/10.1109/ITEC-India.2015.7386870
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Pattanashetty, V., Bhudhihal, S., Shamshuddin, K., Kore, S., Hiremath, S. (2023). Cumulative Neural Network Classification and Recognition Technique for Detecting and Tracking of Small-Size Garbage Materials. In: Joshi, A., Mahmud, M., Ragel, R.G. (eds) Information and Communication Technology for Competitive Strategies (ICTCS 2021). Lecture Notes in Networks and Systems, vol 400. Springer, Singapore. https://doi.org/10.1007/978-981-19-0095-2_14
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-0094-5
Online ISBN: 978-981-19-0095-2