Introduction

Increasing agricultural productivity has received considerable attention in recent years. Current production forecasts are unsustainable: the world population is growing rapidly while agricultural land is shrinking dramatically [1]. By 2050, worldwide grain yields would have to roughly triple to meet the needs of ten billion people [2,3,4]. Because expanding the total cultivated area is difficult, smart agriculture is needed to boost agricultural output [5]. Smart agriculture, also known as precision agriculture, uses cutting-edge technologies such as remote sensing, geographic information systems, and telecommunications, together with powerful computational methods [6], to monitor, analyze, and respond to variations within agricultural fields.

Related Works

Deep learning, which exploits the power of high-performance computing, has recently emerged as a promising method for improving the reliability of image classification. Deep learning with neural networks, often referred to as deep neural networks (DNNs), has won numerous classification and machine learning competitions [7,8,9]. DNNs contain many computational stages with sophisticated structures to model complex and imperfect real-world data, which results in a high computational burden [10]. As a result, DNNs must be coupled with high-performance computing (HPC) platforms to become a strong and practical technology. Because of these stringent hardware requirements, DNNs have so far been applied in precision agriculture (PA) only rarely.

In light of this pressing need, the focus of this work is to investigate the use of modern methods such as DNNs and UAVs to analyze the condition of Indian paddy fields [11]. The overall density of the paddy fields, classified as sparse or normal concentration, is used to determine the grade. It is worth noting that paddy concentration is a key factor in whether a harvest will be successful [12, 13]. The researchers capture a large number of images of the paddy fields from low altitudes using unmanned aerial vehicles (UAVs). DNNs are then used to classify the images and assess the condition of the paddy fields. Experiments are carried out on high-performance computing (HPC) systems composed of numerous powerful computers. Ten-day-old rice paddies in the Delta regions were chosen as the study site for image collection.

Proposed Method

Figure 15.1 depicts the conceptual method for analyzing the sustainability of rice crops using these techniques. Deep neural networks are trained on the collected data and then used to classify the remaining images. The classification results of the deep learning-based classifiers are provided to producers as practical guidance. Owing to the heavy computational burden of deep learning techniques, a high-performance computing system is used to train the deep learning-based classifiers.

Fig. 15.1 Work flow of the proposed system (flow diagram of paddy-field analysis using deep learning-based classifiers and a high-performance computing system)
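To make the workflow concrete, the sketch below shows a minimal PyTorch training-and-classification loop of the kind this pipeline implies. It is an illustrative sketch rather than the authors' implementation: the model, the data loaders, and hyperparameters such as the learning rate and number of epochs are assumed placeholders.

```python
# Minimal sketch of the train-then-classify workflow (illustrative only).
# `model`, `train_loader`, and `field_loader` are assumed to be defined
# elsewhere, e.g. a small CNN and standard PyTorch DataLoaders.
import torch
import torch.nn as nn

def train(model, train_loader, epochs=10, lr=1e-3):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for epoch in range(epochs):
        running_loss = 0.0
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        print(f"epoch {epoch + 1}: mean loss = {running_loss / len(train_loader):.4f}")
    return model

@torch.no_grad()
def classify(model, field_loader):
    # Label the remaining UAV images, e.g. 0 = sparse, 1 = normal concentration.
    device = next(model.parameters()).device
    model.eval()
    predictions = []
    for images, _ in field_loader:
        logits = model(images.to(device))
        predictions.extend(logits.argmax(dim=1).cpu().tolist())
    return predictions
```

On an HPC system the same loop would typically be distributed across several GPUs or nodes; that detail is omitted from the sketch.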

Results and Discussion

The photos must also cover the entire agricultural area, which in India is often small and irregular in shape. Conventional high-altitude aerial images do not meet these rigorous criteria because their quality depends heavily on weather and cloud conditions and they suffer from poor sharpness and low resolution. Furthermore, such images are extremely costly and cannot always be obtained at the desired moment. Drones/UAVs are therefore employed in this investigation to collect images from moderate and very low altitudes. Drone/UAV photos obtained from low altitude have various advantages, including low cost, high image quality and resolution, and reduced noise. Moreover, users can quickly swap camera types to collect different kinds of imagery for a wide range of goals.

Image preprocessing is a crucial component that determines the performance of any algorithm. External environmental factors, such as wind, sunshine, and weather, affect the quality of the photos gathered by UAVs over paddy fields. As a result, the images are carefully preprocessed to increase their clarity. Figure 15.2 depicts the four steps of image preprocessing.

Fig. 15.2 Image preprocessing (contrast adjustment, segmentation, conversion to grayscale, and filtering)
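The chapter does not list the exact operations, but a plausible sketch of the four preprocessing steps (grayscale conversion, contrast adjustment, filtering, and segmentation) using OpenCV is given below; the CLAHE settings, median-filter kernel size, and Otsu thresholding are illustrative assumptions, not the authors' parameters.

```python
# Illustrative preprocessing of a UAV image: grayscale conversion,
# contrast adjustment, noise filtering, and segmentation.
# Thresholds and kernel sizes are assumptions, not the authors' settings.
import cv2

def preprocess(path):
    image = cv2.imread(path)                          # BGR image from the UAV
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)    # conversion to grayscale
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    contrast = clahe.apply(gray)                      # contrast adjustment
    filtered = cv2.medianBlur(contrast, 5)            # filtering (noise removal)
    # Segmentation: Otsu's threshold separates vegetation from background.
    _, mask = cv2.threshold(filtered, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.bitwise_and(filtered, filtered, mask=mask)

# Example: segmented = preprocess("paddy_field_001.jpg")
```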

A CNN typically has multiple stages drawn from two categories of layers: convolutional layers and pooling layers. The overall structure is formed by alternating these two types of layers. The final layers, as in typical ANNs, are fully connected layers with complete links to the preceding levels. Figure 15.3 shows an example CNN, namely LeNet-5. Convolution and subsampling, which correspond to the convolution operation and average pooling, respectively, are the two main terms that describe the core idea of CNNs. Deep learning in general, and CNNs in particular, have proven their worth by winning a series of image classification competitions. This is the primary rationale for adopting this approach for the image classification problem here.

Fig. 15.3 A typical architecture of LeNet (input image, feature extraction through convolution and pooling layers, and output)
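For concreteness, a LeNet-5-style network adapted to the two classes (sparse and normal concentration) could be sketched in PyTorch as follows. The 32 x 32 grayscale input and the layer sizes follow the classic LeNet-5 layout rather than any configuration reported in the chapter.

```python
# A LeNet-5-style CNN: alternating convolution and average-pooling
# (subsampling) layers followed by fully connected layers.
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    def __init__(self, num_classes=2):  # sparse vs. normal concentration
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(),   # 32x32 -> 28x28
            nn.AvgPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(),   # 14x14 -> 10x10
            nn.AvgPool2d(2),                              # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):                 # x: (batch, 1, 32, 32) grayscale patches
        return self.classifier(self.features(x))

# model = LeNet5()  # can then be trained with the loop sketched earlier
```

Replacing average pooling with max pooling, or Tanh with ReLU, would give a more modern variant; the classic layout is kept here to mirror Fig. 15.3.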

The images for this investigation are captured in paddy fields in the Delta regions of southern India. The objective is to grade the condition of 10-day-old paddy fields. The images are taken over two rice paddy fields with a total area of around 2 hectares. Approximately 800 photos are obtained in total. The authors select 200 carefully preprocessed photos and categorize them into two classes, sparse concentration and normal concentration, based on the skills and experience of farming professionals. From this dataset, 160 photos are used for training and 40 images for evaluating the CNNs.
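A hedged sketch of how such a dataset might be organized and split into the 160 training and 40 test images with torchvision is shown below; the folder names, image size, and batch size are assumptions for illustration.

```python
# Illustrative 160/40 split of the 200 labeled UAV images using torchvision.
# Assumed folder layout: paddy_dataset/sparse/*.jpg, paddy_dataset/normal/*.jpg
import torch
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Grayscale(),            # match the grayscale preprocessing step
    transforms.Resize((32, 32)),       # LeNet-5-sized inputs (assumption)
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("paddy_dataset", transform=transform)  # 200 images
train_set, test_set = random_split(
    dataset, [160, 40], generator=torch.Generator().manual_seed(0)
)
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
test_loader = DataLoader(test_set, batch_size=16)
```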

The results obtained from the deep learning-based classifiers are shown in Table 15.1. The statistical indicators are quite acceptable, with values around 0.75. As previously stated, the effectiveness of CNNs depends on their configuration settings. Because DNNs are time-consuming to train, a trial-and-error approach is far from ideal; the DNNs require roughly 48 h to converge for a single configuration. Consequently, optimization strategies may be used in future research to determine the ideal DNN configuration.

Table 15.1 Performance measures
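As an illustration of how such performance measures can be obtained, the sketch below computes accuracy, precision, recall, and F1-score on the test split with scikit-learn; the `model` and `test_loader` objects are assumed from the earlier sketches.

```python
# Compute common classification indicators for the 40-image test split.
# `model` and `test_loader` are assumed from the earlier sketches.
import torch
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

@torch.no_grad()
def evaluate(model, test_loader):
    device = next(model.parameters()).device
    model.eval()
    y_true, y_pred = [], []
    for images, labels in test_loader:
        logits = model(images.to(device))
        y_pred.extend(logits.argmax(dim=1).cpu().tolist())
        y_true.extend(labels.tolist())
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary"
    )
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```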

Furthermore, the productivity of the paddy fields is examined using the same framework. Images of paddy fields five days before harvest are analyzed to estimate production, as shown in Fig. 15.4. The figure depicts three distinct types of paddy fields: normal growth, wind-induced lodging, and disease infection (Fig. 15.4a–c).

Fig. 15.4 Sample performance measures over rice-field productivity: (a) normal growth, (b) wind-induced lodging, (c) disease infection

Conclusions

To analyze the condition of Indian agricultural areas, this research developed a framework based on modern methods such as deep learning and UAVs. The researchers use a dataset gathered from 10-day-old paddy fields in the Delta regions to test the effectiveness of the platform. The proposed approach grades the condition of the paddy fields according to the concentration criterion. The experimental findings show that the system is reasonably suitable for Indian farming practice, with an accuracy of around 0.72. The effectiveness of the approach, however, can still be improved. The preprocessed photos are relatively large and may contain some noise; DNNs could be trained more quickly by compressing the relevant details in these photos, and the training period of DNNs is greatly shortened with relatively small images. In addition, standard image processing methods can be applied to enhance object edges.