Abstract
With shrinking natural resources and mounting climate challenges, agricultural output is expected to come under imminent stress. Deep learning provides immense possibilities by allowing computational models to learn representations of the data generated on farms, enabling precise application of agricultural inputs and smart management of outputs. This will go a long way in addressing global food security concerns.
The present study demonstrates the discriminative and predictive power of state-of-the-art deep learning approaches that have been successfully applied to various facets of engineering in agriculture, ranging from estimation of soil moisture and determination of water stress to disease detection, weed identification and quality evaluation of agro-produce. Realization of these approaches will supplant subjective human judgment with the underlying iterations of the deep learning framework, resulting in increased precision and universality. Broader acceptance and applicability of deep learning will require the inclusion of ground-truth datasets and the integration of mechanisms for fusing data from multiple provenances, thus making the models robust and field worthy.
1 Introduction
By the middle of this century, the global population will be roughly 9.7 billion [1], including about 1 billion people who will be chronically undernourished and suffer from multiple nutritional deficiencies [2]. Generous estimates predict that a 60 per cent increase in global food production (over 2005–2007 levels) must be achieved at all cost to negotiate this monumental challenge [3]. Besides the challenge of attaining this production level, a key consideration has to be equitable access to the food produced. Thus, the direct and indirect costs of food production have to remain plateaued. This in turn can be attained by optimizing inputs such as water, fertilizers, insecticides, pesticides and weedicides; controlling postharvest losses; preserving the quality of produce during storage; maintaining the cold chain; and so on.
There are challenges to be met while we pursue these increased agricultural production figures. The coming decades shall witness severe water scarcity as agricultural water use rises from 3220 to 5152 km3 by 2050 [4]. Similarly, there will be a marked decrease in arable land, as soil erosion will take away 33 billion tons of it, while fertilizer usage will rise from the present 190 million tons to a new high of 223 million tons [5]. This increase in fertilizer application shall contribute to water pollution, raising its nitrogen and phosphorus content to 150 and 130 per cent, respectively [6]. The water requirement of crops needs to be optimized with respect to the water stress invoked by its deficiency. Soil moisture can be directly correlated with water availability for plants; it is therefore widely used as an indicator of water stress [7, 8]. While pressing water stress can reduce productivity, affect produce quality and facilitate the onset of disease [9], moderate water stress has been reported to improve the quality of yield of agro-produce [10]. Regulated deficit irrigation (RDI) can be a potent technological intervention for reducing the staggering amount of water used for irrigation [11].
Providing food security and ensuring sustainability in agricultural production while decreasing the environmental impact of agriculture can be made possible by precision agricultural practices [12, 13]. Precision agriculture is basically a data-driven, technologically enabled sustainable farm management system which requires deployment of internet-of-things (IoT) [14] based sensors [15] for monitoring crop stress phenotypes [16], assessing nutrient requirements [17], analysing crop growth [18] and using unmanned vehicles for computer vision-based weed and disease identification [19]. All this information is compiled by suitable software tools in smart embedded devices for a resilient artificial intelligence (AI)-based decision support system in the agroecosystem [20]. Successful realization of precision agriculture applications shall reduce production cost and optimize labour, energy and space; all this will ultimately lead to enhanced profits from farming.
It is estimated that between 30 and 50 per cent (1.2–2 billion tons) of the food produced on the planet is not consumed [21]. These losses are shared equally between postharvest losses, i.e. quantitative losses due to managerial and logistic issues, and food wastage, i.e. qualitative losses attributed to biochemical changes within the food matrix. The blue water footprint of this lost food is about 250 km3 annually [22]. Assessment of the external and internal quality of food and agro-produce can be carried out cost-effectively and non-destructively by spectroscopic sensing approaches [23,24,25,26]. Heterogeneity of samples, spectrometers and environments results in considerable inconsistency in the spectral data, leading to numerous problems during quality evaluation [27]. During feature extraction, chemometric models should display robustness and possess the inherent capability to remain unaffected by detection conditions and the biological variability of the samples.
A penchant for prediction becomes an obsession for humans when uncertainty prevails over the outcomes. Agriculture is one such set of activities where uncertainty lurks behind every operation, and for operations involving engineering interventions the associated challenges carry substantial monetary baggage as well. Unreliable expertise in judging and foreseeing unpleasant situations has constantly spurred humans to devise tools and methods to scale time and opt for corrective measures to reap a rich harvest sustainably. A tool of relevance for predicting situations and causes in relation to agriculture is the deep learning network. It comprises a broad category of machine learning techniques wherein features are learnt in a hierarchical fashion. This technique can successfully handle computer vision tasks, including image classification, detection and segmentation [28]. In essence, simple modules stacked in numerous layers are all learning and simultaneously computing nonlinear input–output mappings. Each module is capable of transforming the representation of its input to increase selectivity and invariance. Multiple nonlinear layers make it possible for a deep learning architecture to implement extremely intricate input functions while being sensitive to minute details. This makes it possible for deep learning modules to distinguish, say, between a diseased leaf and a healthy leaf while disregarding the background, orientation, lighting or surroundings.
The past decade has seen a deluge of sensors and transducers that have been coupled with various electronic gadgetry to record the responses of the various vectors causing detrimental effects in agricultural production. This plethora of sensors is generating massive volumes of data. Interpreting this data to decipher valuable information poses a worthy challenge across all disciplines of agricultural engineering. Deep learning networks have made extraction of features from complex nonlinear data simpler, using the convolutional neural network (CNN) [29] and the recurrent neural network (RNN), which includes the long short-term memory (LSTM), bidirectional long short-term memory (bi-LSTM) and gated recurrent unit (GRU) [30,31,32]. Other deep learning architectures include the deep belief network [33], auto encoders [34] and the deep Boltzmann machine.
Deep learning can be used to carry out big data analysis for computer vision [35] applications related to plant water stress management [36] and help in formulating RDI protocols for efficient water management. Extraction of information from spectral data representing local and global features of agro-produce can be effectively carried out by deep learning approaches [37]. Deep learning can handle complex image-based plant phenotyping tasks such as leaf counting [38], disease detection [39] and pixel-wise localization of root, shoot and ears [40]. All this information can be combined to support the development of intelligent agricultural machinery [41].
During the past century, agricultural engineers have contributed immensely to several path-breaking advancements in agricultural mechanization [42]; these professionals are instrumental in guiding the agrarian community through its transition from machinery operators to machinery supervisors, enabling them with precision agricultural technologies circumscribing precision water management, intelligent use of agricultural machinery and smart postharvest management of agricultural produce. This paper embodies an amalgamation of deep learning applications across the different facets of agricultural engineering. A literature search revealed that deep learning has been applied to a wide range of issues related to the subject of this paper; selection and rejection were carried out methodically (Fig. 1) to provide readers with a comprehensive read that elucidates the application of deep learning algorithms for engineering interventions in agriculture. Also highlighted in this paper are ways and means for extracting spatio-temporal features to overcome the limitations of conventional approaches, and how deep learning will obviate the hindrances that have been holding back the widespread realistic adoption of intelligent, smart, IoT-based engineering applications in agriculture. The paper culminates by putting forward the challenges that contemporary deep learning approaches need to address to enable wider effective application and acceptance.
2 Deep learning versus contemporary chemometrics
The near-infrared (NIR) spectrum, spanning 780–2500 nm, has been widely used to register changes in the agri-food system before harvest, in terms of plant attributes, stresses, diseases, yield attributes and weed detection [43]; and after harvest, in terms of varietal differences of the produce, food quality, food contamination, etc. [27]. Excellent results have been demonstrated in the estimation of a range of soil properties in the visible and NIR range, including soil moisture [44]. In fact, there is an entire gamut of precision agriculture outcomes that can be addressed by deep learning techniques (Fig. 2).
Plants undergo various changes in colour and shape followed by various physiological and biochemical changes as a response to attack by pathogens; such attacks often culminate in the onset of disease. Stress induced by disease, water, light or pests has a direct bearing on the transcription factors (abscisic acid, auxin and cytokinins), which can be deciphered directly only by molecular and serological methods for high-throughput analysis, and indirectly by thermography, fluorescence, spectroscopy or hyperspectral imaging (HSI) and the associated chemometrics. However, susceptibility to ambient environmental conditions and the absence of steady light during imaging restrict the exhaustive use of these techniques.
In postharvest agriculture the absorption spectra generally record changes by means of the hydrogen-containing groups (e.g. S–H, C–H, N–H, O–H), which are directly related to the proximate composition of agro-products in terms of sugar, protein, fat, acid and water contents. The spectrum is therefore loaded with information on the related bio-molecules and other chemical substances. The underlying principle is explained by the Beer-Lambert law, which states that a linear relationship exists between absorbance and the concentration of the absorbing species; changes in the chemical composition of the substrate are thus registered in the absorbance spectra. Variations in the basic biochemical matrix of agro-produce can be effectively reflected by different linear and nonlinear chemometric methods. Often the linear models fail to register subtle changes in chemical composition, while nonlinear models are always associated with the risk of over-fitting. Acquisition of spectral data can never be free of spectral noise. While spectral pre-processing algorithms can handle noise arising from the biological anomalies of chemical composition and from changes in environmental conditions, noise introduced by the physical state of the spectrophotometer can render the spectral data largely irrelevant.
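As an illustration of this linearity, a minimal sketch of the Beer-Lambert relation is given below; the molar absorptivity and path length are hypothetical values chosen for demonstration, not measured constants.

```python
import numpy as np

# Beer-Lambert law: absorbance A = epsilon * l * c, linear in concentration c.
# The constants below are illustrative, not measured values.
epsilon = 1.2e3    # molar absorptivity (L mol^-1 cm^-1), hypothetical
path_length = 1.0  # cuvette path length (cm)

def absorbance(concentration):
    """Predicted absorbance for a given analyte concentration (mol/L)."""
    return epsilon * path_length * concentration

concentrations = np.array([0.001, 0.002, 0.004])
print(absorbance(concentrations))  # absorbance doubles as concentration doubles
```

This linearity is exactly what allows a calibration curve (and, by extension, a chemometric model) to map spectra back to composition.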
It has been widely reported that assigning specific features to soil spectra is difficult because soil is a heterogeneous, complex mixture of materials [45]. Traditional regression models are not suitable for modelling soil moisture content because of the associated nonlinearity and non-stationarity, with parameters that are difficult to measure in the field. The limitations of traditional modelling techniques can be minimized by using soft-computing-based data-driven techniques (machine learning and deep learning) to estimate the components of the hydrologic cycle (such as ET0, runoff and soil moisture) as a function of time and space. The accuracy rendered by these techniques for estimation of ET0 and soil moisture (SM) is more or less in the acceptable range. However, their effective use is limited by the quality and period of the time-series data. It is therefore well understood that the performance of empirical and ensemble models for prediction of short-term daily ET0 depends on the choice of model and the reliability of the input variables. The suitability of these techniques for predicting short-term (1–7 days) ET0 for real-time irrigation scheduling based on actual water requirement is questionable. There is a dire need for models and techniques that can fill this gap.
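To make the data-driven framing concrete, the sketch below casts short-term ET0 forecasting as supervised learning on lagged values. A linear least-squares model stands in for the deep sequence models discussed in this paper; the sliding-window construction is identical for an LSTM, and the series is synthetic.

```python
import numpy as np

# Minimal sketch: short-term ET0 forecasting framed as supervised learning
# on lagged values. A linear least-squares model stands in for an LSTM here;
# the windowing logic carries over unchanged. Data are synthetic.
rng = np.random.default_rng(0)
et0 = 4.0 + np.sin(np.arange(200) * 2 * np.pi / 30) + rng.normal(0, 0.1, 200)

def make_windows(series, lag):
    """Stack `lag` past values as features, next value as target."""
    X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X, y

X, y = make_windows(et0, lag=7)            # use the last 7 days to predict day 8
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"in-sample RMSE: {rmse:.3f} mm/day")
```

A deep model would replace the least-squares fit with stacked recurrent layers, but the data preparation and the 1–7 day horizon question raised above remain the same.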
Deep learning algorithms focus on learning features progressively from the data at several levels [46, 47]. As deep learning models learn from data, a clear understanding and representation of the data are vital for building an intelligent system that can make complex predictions. Proper model selection is also crucial, as each architecture has multiple unique features and processes the data in different ways. The deep learning architectures applied to the agricultural engineering domain have mainly been based on Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN) and Auto Encoders (AE) [48, 49]. A succinct description of these architectures follows.
ANN architecture comprises multiple perceptrons or neurons at each layer, and the neurons in different layers are linked by weighted synaptic connections. The architecture of an ANN consists of an input layer, one or more hidden layer(s) and an output layer (Fig. 3a). An ANN learns patterns by modifying its weights based on the error between actual and predicted output, using the backpropagation algorithm to discover hidden patterns inside the dataset. The universal approximation capability and flexible architecture allow ANN models to capture complex nonlinear behaviours in the dataset [50].
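The learning mechanism described above can be sketched as follows; this is a toy backpropagation loop on synthetic XOR data, not any of the published models.

```python
import numpy as np

# Minimal sketch of the ANN described above: one hidden layer trained by
# backpropagation, with weights modified from the error between actual and
# predicted output. Toy XOR data; not a production training loop.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR target

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)      # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)      # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

losses, lr = [], 0.1
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                         # forward pass
    out = sigmoid(h @ W2 + b2)
    err = out - y                                    # error drives the update
    losses.append(float(np.mean(err ** 2)))
    d_out = err * out * (1 - out)                    # backpropagate: output layer
    d_hid = (d_out @ W2.T) * (1 - h ** 2)            # ...then hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(0)

print(f"MSE fell from {losses[0]:.3f} to {losses[-1]:.3f}")
```

The falling error illustrates the weight-modification cycle described above; agricultural applications differ only in scale and in the features fed to the input layer.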
CNN is extensively used in computer vision-based systems that can automatically extract features and perform various tasks such as image classification and semantic segmentation. It has been successfully utilized in several challenging visual analysis tasks in agriculture, such as pest and disease identification, stress detection and weed identification, achieving tremendous performance in tasks involving visual image analysis that were previously considered to be purely within the human realm [51]. By applying various convolutional filters, the models can extract a high-level representation of the data, making them versatile for tasks such as image classification (Fig. 3b). A CNN has three main types of layers, namely convolutional, pooling and fully connected layers. The convolutional layer generates the feature map capturing all essential features. Pre-trained CNN models such as LeNet [52], AlexNet [53], VGG16 [54], InceptionV3 [55], GoogleNet [56], ResNet [57], MobileNet [58], Xception [59], DenseNet [60] and Darknet53 [61] have been successfully deployed in several computer vision applications [62]. RNN is a class of artificial neural network that addresses time-series problems involving sequential data (Fig. 3c). Unlike feed-forward neural networks, RNNs can make use of their internal memory to process sequential data. The distinctive feature of RNNs is their capability to pass data across time steps: the recurrent structure applies the same function to each input, while the output for the present input is based on past computation. After generating the output, it is copied and fed back into the recurrent network. Thus, for decision making, the network considers both the current input and the output learnt from previous inputs [63].
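The convolution and pooling operations at the heart of a CNN can be illustrated with a minimal NumPy sketch; the image and the edge-detecting kernel below are contrived for demonstration, not taken from any cited model.

```python
import numpy as np

# Sketch of the two core CNN operations described above: a convolutional
# layer producing a feature map, followed by max pooling. Pure NumPy, for
# illustration only; real systems use optimized frameworks.
def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel, sum elementwise products."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keep the strongest activation per patch."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.zeros((6, 6)); image[:, 3:] = 1.0   # right half of the image bright
edge_kernel = np.array([[-1.0, 1.0]])          # vertical-edge detector
fmap = conv2d(image, edge_kernel)              # responds only at the boundary
print(max_pool(fmap))
```

The feature map lights up exactly where the intensity changes, which is the sense in which convolutional filters "capture essential features" such as leaf edges or lesion boundaries.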
AE is a special type of artificial neural network used to learn efficient data encodings in an unsupervised manner (Fig. 3d). The AE compresses the input into a lower-dimensional code and then reconstructs the output from this representation. The encoder part of the AE serves encoding and data compression purposes and has a decreasing number of hidden units. The latent space in the network holds a compact or compressed form of the input. The decoder part attempts to regenerate the input from the encoded data and has an increasing number of hidden units [64].
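A toy linear version of this encode-compress-decode cycle is sketched below, with a 4-dimensional synthetic input compressed to a 2-dimensional latent code; real autoencoders use nonlinear layers, and the data here are fabricated for illustration.

```python
import numpy as np

# Toy version of the autoencoder in Fig. 3d: an encoder compresses 4-D input
# to a 2-D latent code (the bottleneck), a decoder reconstructs it; training
# minimizes reconstruction error. Linear layers only, synthetic data.
rng = np.random.default_rng(1)
latent_true = rng.normal(size=(200, 2))
X = latent_true @ rng.normal(size=(2, 4))      # 4-D data lying on a 2-D subspace

W_enc = rng.normal(0, 0.1, (4, 2))             # encoder: 4 -> 2 (bottleneck)
W_dec = rng.normal(0, 0.1, (2, 4))             # decoder: 2 -> 4

def reconstruct(X):
    return (X @ W_enc) @ W_dec

errors, lr = [], 0.01
for _ in range(500):
    code = X @ W_enc                           # compressed representation
    X_hat = code @ W_dec                       # attempted reconstruction
    diff = X_hat - X
    errors.append(float(np.mean(diff ** 2)))
    W_dec -= lr * code.T @ diff / len(X)       # gradient steps on both halves
    W_enc -= lr * X.T @ (diff @ W_dec.T) / len(X)

print(f"reconstruction MSE: {errors[0]:.3f} -> {errors[-1]:.4f}")
```

Because the data truly lie on a 2-D subspace, the bottleneck can recover them; this is the mechanism that lets AEs denoise spectra or compress images in the applications surveyed here.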
3 Production agriculture
Understanding plant phenotyping assumes prominence as it is directly associated with all efforts to increase world food production to meet ever-rising demand. The quantitative study of parameters related to plant traits such as growth, stress and yield using rapid and non-destructive sensing techniques is an important aspect of high-throughput phenotyping [65,66,67]. In-field measurement of crop parameters can also be accelerated with advances in vision-based technology in agriculture. Phenotyping of plant growth [68], canopy coverage [69], leaf structure [70], weed density, root growth status [71], etc. has been successfully demonstrated and is increasingly important as a way to explore deep learning based smart stress management systems. Plant stress occurs when abnormal environmental conditions arise as a result of biotic (insects, pests, fungi, viruses and weeds) and abiotic (water, temperature, nutrients and toxicity) elements during plant development. These plant stresses are capable of threatening global food security. Plant disease outbreaks are a persistent and widespread hazard arising from complicated ecological dynamics, and the standard state-of-the-art mechanisms are not able to cope with this challenge. Using image-based stress datasets holds promise and is perceived as a possible step in the right direction for plant stress management. Significant advances in image processing and machine learning techniques have been made over the last decade. Deep learning based models have high accuracy and can detect plant stress quickly. This method of stress identification is non-contact, takes less time, and its output can be used in real-time crop health management.
The standardization of visual assessments, deployment of imaging techniques and application of big data analytics may overcome the limitations of unaided visual measurement and improve the reliability and accuracy of stress assessment [72, 73]. Deep learning assists traditional computer vision techniques in achieving higher accuracy in crop image detection, stress identification, classification, prediction, quantification and segmentation [72]. The methodology adopted in recent studies deploying deep learning approaches in production agriculture for measurement of plant characteristics, weed detection, biotic and abiotic stress assessment and yield parameters is summarized schematically in Fig. 4. The essence of these studies is discussed in the subsequent sections.
3.1 Measurement of plant attributes
Precise seedling counting, plant stand and panicle counts are vital vectors for assessing seedling vigour and estimating crop density and uniformity of emergence rate for field and plantation crops. Tassel detection and flower counting offer a new opportunity to estimate yield and optimize fruit production without an automatic yield monitoring system, facilitating site-specific crop management [74, 75]. Deep convolutional neural network (DCNN) algorithms (Faster Region Convolutional Neural Networks (FR-CNN) and Convolutional Neural Networks (CNN) + support vector machine (SVM)) significantly facilitated advanced approaches for apple flower detection [74]. Wu et al. [76] captured a dataset comprising 147 images under natural, uncontrolled field conditions and validated it appropriately on a previously unseen dataset. A rice seedling dataset consisting of 40 high-resolution aerial images, captured in situ by a red-green-blue (RGB) camera perched on an unmanned aerial vehicle (UAV) and manually annotated with dots for seedling counting, was analysed through a deep CNN-based technique. Good agreement (> 93%) between manual and automated UAV image-based rice seedling counting heralds a new opportunity for high-accuracy yield estimation. Laborious and subjective scoring systems for recognizing cotton flowering patterns and detecting blooms have been replaced by deep learning approaches [77]. The promising results for characterization of flowering patterns among genetic classes and genotypes have been adopted to predict reproductive improvements and were found to be of pivotal importance for crop yield forecasting.
The high classification accuracies (> 90%) of DCNN for paddy tiller counting [78], Faster R-CNN for characterization of flowering patterns in cotton plants [79], MobileNet for cotton plant detection using a UAV system [77] and CNN + SVM for flower detection in apple [74] show the potential of deploying these techniques in online embedded systems for electronically connected yield estimation. TasselNetV2+ outperformed TasselNetV2 for counting wheat (R2 = 0.92), maize (R2 = 0.89) and sorghum (R2 = 0.68) plants using high-resolution field images (1980 × 1080) in less time [80]. Further, it was reported that, compared to Faster R-CNN, TasselNetV2+ showed effectiveness and robustness across different plant datasets, such as wheat ear (R2 = 0.92), maize tassel (R2 = 0.89) and sorghum head (R2 = 0.67) counting. This feature of TasselNetV2+ can be attributed to its inherent ability to encode sufficiently good appearance features even at low image resolution, without counting repetitive visual patterns as Faster R-CNN does [81, 82]. TasselNetV2 and TasselNetV2+ perform better than Faster R-CNN [61], whereas Faster R-CNN performs better than TasselNet for detecting maize tassels [80, 82]. ResNet demonstrated far better results than VGGNet in detecting and counting maize tassels from original high-resolution UAV images [82]. The application of an LSTM model to simulate the effect of extreme climate change, plant phenology, meteorology indices and remote sensing data across the nine states of the US Corn Belt could predict 76 per cent of corn yield variation [83]. Under extreme weather conditions, the LSTM model proved to be more robust than other machine learning models, such as the least absolute shrinkage and selection operator (LASSO) and random forest (RF) [84, 85].
3.2 Abiotic stress assessment
Spectroscopy and imaging are noninvasive abiotic stress identification methods used for discovering deficiencies (nutrient, water, seed vitality, etc.) that affect the vigour of plants. Identification of abiotic stress includes extraction of biophysical parameters of plants, such as canopy water content, leaf pigments, canopy nitrogen and light use efficiency, from the spectral data. Digital imaging is a simple, low-cost measurement technology which becomes a powerful tool when used along with deep learning techniques for stress monitoring applications in precision agriculture. Deep learning has introduced a paradigm shift in 2D RGB image-based plant stress phenotyping [72]. A broad range of deep learning techniques have been used in crop abiotic stress phenotyping, including DCNN [86], AlexNet [87], Faster R-CNN [88], GoogLeNet [87], ResNet [89], RootNav [71], SegNet [90], SW-SVM [91], VGGNet [88, 92] and UNet [71]. Deep learning architectures have been successful in a vast range of plant abiotic stress phenotyping work, such as crop identification/recognition based on leaf vein morphology patterns [70, 93], leaf counting and tassel detection in maize and sorghum [75, 94,95,96], stalk count and width measurement [97], panicle segmentation in sorghum [98], root localization and feature detection [71, 99], bloom detection, emergence counting and flowering characterization in cotton and apple [74, 77] and soil moisture estimation using thermal images [86].
Deep learning techniques have been used for identification of abiotic stresses in field crops (paddy, maize, soybean, sorghum and wheat) as well as horticultural crops (tomato, potato, okra). The VGG-16 architecture was found to be capable of recognition and classification of various abiotic stresses in different varieties of paddy crop using 30,000 RGB images, with accuracies of 92.13 and 95.08 per cent, respectively [92]. Non-destructive imaging, such as proximal and remote sensing, was used for deep learning-based abiotic stress identification under field conditions with different illuminations, backgrounds, colours, sizes and shapes of crop. It was concluded that the accuracy of object detection depends on the right selection of deep learning tools, an optimum number of high-resolution images and appropriate image dimensions [79]. Across all the studies it was a common observation that deep learning-based object detectors like AlexNet, Faster region convolutional neural network (Faster R-CNN), GoogLeNet, Inception V3, SW-SVM with VGG-16, ResNet and SegNet performed far better than other architectures in identifying plant abiotic stresses.
In a vision-assisted precision agriculture application, a DCNN model was developed to identify water stress in maize and soybean. Three frameworks, i.e. AlexNet, GoogLeNet and Inception V3, were used as an unsupervised technique to precisely separate the visual cues representing water stress on plant leaves. GoogLeNet was found to be superior, with accuracies of 98.3 and 94.1 per cent for maize and soybean plants, respectively. It was inferred that digital RGB image cues contribute maximally to the deep learning model for decision management [87]. Unsupervised localization of RGB image cues is used to identify the abiotic stress level [16]. To identify and visualize abiotic stresses in horticultural crops (tomato, potato and okra), images of various nutrient (excess or deficiency), soil moisture (excess or deficit) and canopy temperature (low or high) stresses were needed, but were not available from public databases. AlexNet and GoogLeNet architectures were used in most of the stress identification studies in vegetable crops, and GoogLeNet outperformed AlexNet in terms of accuracy [87, 100]. All told, recent studies indicate the growing potential of deep learning applications for plant stress identification and classification; this potentially obviates the laborious and expensive judgment of stress regions by field specialists and opens up the scope for image-based plant phenotyping, leading to the development of user-friendly precision agriculture tools.
3.3 Detection and classification of plant disease
Plant health monitoring and disease diagnosis are essential in the early stages of plant growth to prevent disease transmission, helping in effective crop management before significant crop damage occurs. Plant disease identification is traditionally done manually, either by visual observation or by using a microscope. These methods are time-consuming and labour-intensive, and they involve a substantial risk of misidentification due to the subjective perception of the human mind. The task of plant disease identification can be accelerated by adopting advanced technologies based on image processing and artificial intelligence. Deep learning, which uses good-quality images as source data, is gaining popularity nowadays for crop health monitoring and management, in line with developing artificially intelligent systems. Deep learning architectures such as AlexNet, GoogLeNet, ResNet, VGG and DenseNet have been successfully used to identify and classify various plant diseases in food crops such as wheat [101], maize [102], rice [103] and millets [104]; cash crops such as sugarcane [105], tobacco [106], cotton [107] and jute [108]; plantation crops such as coffee [109], coconut [110] and tea [111]; and horticultural crops such as tomato [112], ladyfingers [113], apple [114] and grape [115].
Many researchers have used images of diseased plants from public databases for training deep learning architectures. Mohanty et al. [39] trained a deep convolutional neural network with 54,306 images of healthy and diseased plants available from a public database and identified 14 diseases of 26 different crops using the GoogLeNet architecture with an accuracy of 99.35 per cent. An open database of 87,848 images of 25 crop species in 58 distinct plant-disease classes was used by Ferentinos [116]. The dataset was split into an 80/20 training/testing ratio, the split most commonly used for neural network applications. The deep learning architectures AlexNet, AlexNetOWTBn, GoogLeNet, Overfeat and VGG were utilized for identification of the various classes. The success rate of the VGG architecture was 99.53 per cent, with an inaccuracy of 0.47 per cent. Too et al. [117] achieved an accuracy of 99.75 per cent with the DenseNets architecture using the same database as Mohanty et al. [39]. A deep CNN deployed on 70,295 images of the same database obtained an accuracy of 99.78 per cent with ResNet [118]. The accuracy of disease detection and classification reportedly increased with the evolution of DCNN architectures. The black sigatoka and speckle diseases were identified and classified in banana [100]: the images were obtained from an open source, trained using the LeNet architecture, and features were extracted using convolution and pooling layers. The deep learning technique was able to identify and classify both diseases with 99.72 per cent accuracy. Tomatoes are susceptible to diseases such as late blight, two-spotted spider mite, target spot, leaf mould, mosaic virus and yellow leaf curl virus, which reduce production and impair quality. A collection of 13,262 diseased tomato leaf images was obtained from the PlantVillage dataset to train the AlexNet and VGG16 deep learning architectures for non-destructive estimation of the extent of diseases [112].
AlexNet showed good classification accuracy (97.49 per cent) at minimum runtime compared with VGG16 (97.26 per cent). Ji et al. [115] proposed a UnitedModel for grape leaf disease detection based on InceptionV3 and ResNet50, and compared it with the VGGNet, GoogLeNet, DenseNet and ResNet architectures. Leaf images (1619 in number) of black rot, esca and isariopsis leaf spot diseases were taken from the PlantVillage dataset. The UnitedModel extracts more representative features using the width of InceptionV3 and the depth of ResNet50, resulting in 98.57 and 99.17 per cent test and validation accuracy, respectively, for grape leaf disease detection.
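The 80/20 training/testing split used in several of the studies above can be sketched as follows; the shuffling and index bookkeeping shown are a generic illustration, not the exact procedure of the cited works.

```python
import numpy as np

# Sketch of an 80/20 train/test split with shuffling, so both partitions
# sample the whole dataset. The sample count matches the open database of
# 87,848 plant-disease images mentioned above; labels are not simulated here.
def train_test_split(n_samples, test_frac=0.2, seed=0):
    """Return shuffled, disjoint train/test index arrays."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)       # shuffle once, then partition
    n_test = int(n_samples * test_frac)
    return idx[n_test:], idx[:n_test]

train_idx, test_idx = train_test_split(87848)
print(len(train_idx), len(test_idx))
```

Shuffling before partitioning matters because image datasets are often stored grouped by class; an unshuffled split would leave some disease classes entirely out of training.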
Several researchers have used cameras and smartphones to capture digital images of diseased plant leaves and trained deep learning based algorithms for disease detection and classification. Rangarajan and Raja [113] collected 2554 digital images to classify ten major diseases that affect the leaves of eggplant, hyacinth bean, lime and lady finger plants. Six pre-trained CNN models, viz. AlexNet, VGG16, VGG19, GoogLeNet, ResNet101 and DenseNet201, were used for identification and classification of the different diseases. Among all the architectures tested, GoogLeNet performed best, with a validation accuracy of 97.3 per cent. Prune crops such as peach, cherry and apricot are widely grown in temperate and subtropical regions. Virus-infected prune trees show growth depleted by 10–30 per cent, resulting in yields 20–60 per cent lower than those of healthy trees [120]. A deep learning approach was used for plant disease and pest detection in prunes [121]. A total of 1995 images of plant leaves affected by eight different diseases and pests were collected for the experiment. The transfer learning based pre-trained deep learning models GoogLeNet, AlexNet, VGG16, VGG19, ResNet50, ResNet101, Inception-V3, InceptionResNetv2 and SqueezeNet were used for feature extraction. The performance of the extracted features was measured with SVM, Extreme Learning Machine (ELM) and K-Nearest Neighbours (kNN) classifiers; the maximum disease detection accuracy, 97.86 per cent, was achieved by ResNet50 with the SVM classifier. Plantation crops such as cotton, coffee, tea and sugarcane are widely cultivated and have high economic value, and disease infestation brings a huge economic shock for the farmers. Deep learning based techniques have been used by researchers for precise disease management and quality improvement by minimizing yield loss.
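The transfer-learning pipeline described above pairs a pre-trained network, used purely as a feature extractor, with a classical classifier on top. A minimal sketch follows; the toy embedding function stands in for a real backbone such as ResNet50 (an assumption made so the example runs without a deep learning framework), and the kNN classifier plays the role of the SVM/ELM/kNN stage:

```python
import math
import random

def extract_features(pixels):
    """Stand-in for a pre-trained backbone with its classification head
    removed: maps an image (flat pixel list) to a fixed-length vector."""
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return [mean, var, max(pixels), min(pixels)]

def knn_predict(train_feats, train_labels, feat, k=3):
    """Classify one feature vector by majority vote of its k nearest
    training features (Euclidean distance)."""
    nearest = sorted(
        (math.dist(f, feat), lbl) for f, lbl in zip(train_feats, train_labels)
    )[:k]
    votes = [lbl for _, lbl in nearest]
    return max(set(votes), key=votes.count)

# Synthetic "healthy" (bright) and "diseased" (dark) leaf images.
rng = random.Random(0)
healthy = [[rng.uniform(0.6, 1.0) for _ in range(64)] for _ in range(20)]
diseased = [[rng.uniform(0.0, 0.4) for _ in range(64)] for _ in range(20)]
feats = [extract_features(img) for img in healthy + diseased]
labels = ["healthy"] * 20 + ["diseased"] * 20
prediction = knn_predict(feats, labels, extract_features([0.9] * 64))
```

The appeal of this design, as the prune study illustrates, is that only the lightweight classifier is fitted to the 1995 collected images while the backbone's weights stay frozen.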
A set of 13,842 manually captured images of diseased plants was used to train and test a DCNN model for the recognition of smut, grassy shoot, rust and yellow leaf diseases in sugarcane [105]; the diseases were identified and classified with an accuracy of 95 per cent. Esgario et al. [109] used smartphones to collect 1747 images of coffee leaves infected with leaf miner, rust, brown leaf spot and cercospora leaf spot diseases. The AlexNet, GoogLeNet, VGG19 and ResNet50 architectures were used for classification and estimation of disease severity; ResNet50 performed best with an accuracy of 95.63 per cent. Hu et al. [111] collected 144 images of diseased tea plant leaves, covering leaf blight, bud blight and red scab, to improve the performance of the CIFAR10 model for disease identification with a small number of images. The number of model parameters was reduced in the proposed deep learning architecture to improve the detection process, and the improved CIFAR10 model correctly identified tea leaf diseases with 92.5 per cent accuracy. Detection of diseases (cercospora, bacterial blight, ascochyta blight and target spot) in cotton leaves was achieved with 96 per cent accuracy after training a DCNN on 500 manually collected images [107]. Rice, wheat, maize and soybean are cultivated on a large scale worldwide and are important food and feed grains; diseases spread easily in these crops, causing significant yield losses. Numerous studies have used deep learning based techniques to identify and classify diseases in food grain crops. A deep CNN based algorithm has been used to classify blast, bakanae, false smut, brown spot, sheath blight, bacterial leaf blight, sheath rot, bacterial sheath rot, bacterial wilt and seedling blight diseases in rice [103].
The model was trained on images of diseased plants captured with a camera together with some images gathered from public sources (500 in total), and the DCNN classified diseases with an accuracy of 95.48 per cent. Lu et al. [101] used 9230 wheat plant images from a public database to train deep learning architectures for recognizing powdery mildew, stripe rust, smut, leaf blotch, black chaff and leaf rust in wheat; VGG-FCN-VD16 and VGG-FCN-S achieved recognition accuracies of 97.95 and 95.12 per cent, respectively. Wu et al. [122] identified bacterial rot, downy mildew, pest damage and spider mite infestation in soybean after training deep learning models with 1470 leaf images; ResNet outperformed other architectures such as AlexNet and GoogLeNet with an accuracy of 94.29 per cent. Deep learning based approaches for disease detection and classification have thus been employed in a wide variety of crops, and the architectures continue to be updated to improve accuracy and make better predictions in challenging environments. With on-the-go application of deep learning based technologies, plant pathologists and farmers will be able to diagnose plant diseases early and take the necessary precautions.
3.4 Yield attributes and harvesting
Detection, counting and size estimation are critical tasks for fruit harvesting and yield estimation. Research is progressing towards vision-based systems for autonomous fruit harvesting. A robotic fruit-picking harvester comprises two distinct components: a vision system and a manipulator system. Fruits attached to plants among the leaves, stems and branches are identified primarily through the vision system. Numerous researchers have exploited the feature extraction and autonomous learning abilities of deep learning in the vision system for effective detection, counting and harvesting of fruits. LedNet is a deep learning based framework for real-time apple detection reported to be useful in orchard harvesting [123]; the framework was robust and efficient, performing detection with a recall of 0.82 and an accuracy of 0.85. Onishi et al. [124] implemented the VGG16 architecture for apple detection using images from a stereo camera. Sa et al. [125] proposed a deep learning based technique for fruit detection after fine-tuning the VGG16 network with a pre-trained ImageNet model; the output could be used for fruit yield estimation and automatic harvesting. The F1 scores for rock melon, sweet pepper, apple, avocado, orange and mango were 0.85, 0.84, 0.94, 0.93, 0.92 and 0.94, respectively, and the scores were observed to be affected by the complexity of fruit shape and similarity of colour with the plant canopy. ResNet50 combined with a Feature Pyramid Network and Mask R-CNN was used for the detection of strawberry [126]. This approach could overcome the main limitations of strawberry fruit identification under typical field conditions, such as multi-fruit adhesion, overlapping, field obstacles and varying light conditions around the plants.
The trained model achieved precision, recall and mean intersection over union of 95.78, 95.42 and 89.95 per cent, respectively. Afonso et al. [127] used the Mask R-CNN algorithm for detection of tomato in a greenhouse; its performance was found superior to the machine learning approaches used by Yamamoto et al. [128] and the Inception-ResNet based architecture of Rahnemoonfar and Sheppard [129]. Estimating the size of broccoli is crucial for determining its harvestability and yield. Blok et al. [130] used a deep learning algorithm, the occlusion region-based convolutional neural network (ORCNN), to deal with occlusions and assess broccoli size. On 487 broccoli images, the ORCNN outperformed Mask R-CNN, recording a mean sizing error of 6.4 mm against 10.7 mm for Mask R-CNN. Integration of vision-based systems and deep learning approaches for fruit detection, counting and yield estimation has sped up automation in harvesting [131], and adopting deep learning based technology will make it easier to cope with labour-intensive harvesting operations.
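The intersection over union (IoU) metric reported in several of these detection studies compares a predicted bounding box with a ground-truth box. A minimal implementation for axis-aligned boxes, with illustrative coordinates, might look like:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection is commonly counted correct when IoU exceeds a threshold
# such as 0.5 (an assumption; thresholds vary between studies).
overlap = iou((0, 0, 10, 10), (5, 5, 15, 15))  # 25 / 175
```

Mean IoU, as quoted for the strawberry detector, is simply this quantity averaged over all matched detections in the test set.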
3.5 Weed detection
Weed infestation in crops is one of the most serious issues confronting modern agriculture. Weed control is currently carried out with manual hand tools, weedicides and modern weeding machinery. Various ground-based weed identification and management techniques, including artificial neural networks [132], image processing [133], the Internet of Things [134] and spectral reflectance [135], have been studied for weed management in crops. The use of a deep learning approach for selective weeding has been reported to be an effective weed control method [136]. The CNN technique was used to detect weeds in spinach and bean crops using an unsupervised training dataset [137]: the proposed system automatically detected crop rows, identified inter-row weeds, created a training dataset and used CNNs to build a model for detecting crop and weeds from a repository of UAV-collected images. Ferreira et al. [138] used a UAV to capture 400 field images and applied machine learning and deep learning techniques to detect weeds in 15,336 segmented images of soil, soybean, grass and broadleaf weeds. The ConvNets detected weeds more precisely and achieved higher accuracy (> 99 per cent) than SVM, Adaboost–C4.5 and RF. The VGGNet, GoogLeNet and DetectNet architectures were used for detection of weeds in bermuda grass: VGGNet performed better in identifying dollar weed, old world diamond-flower and Florida pusley with an F1 score of more than 0.95, whereas DetectNet had a high F1 score of > 0.99 in detecting bluegrass [139]. Deep learning techniques were deployed on 17,509 captured images for classification of eight different weed species [140]; classification accuracies of 95.1 and 95.7 per cent were obtained with Inception-V3 and ResNet-50, respectively, and the ResNet-50 architecture was implemented in real time with an inference time of 53.4 ms per image. Osorio et al.
[141] used machine learning and deep learning techniques to detect weeds in lettuce crops from drone collected field images. The F1 scores of SVM, YOLOV3 and Mask R-CNN were 88, 94 and 94 per cent, respectively. Faster R-CNN and Single Shot Detector were used to detect weeds in mid to late season soybeans [142]. Faster R-CNN performed better in terms of precision, recall, F1 score, Intersection over Union and inference time. The ground ivy, dandelion and spotted spurge weeds were successfully detected in ryegrass using deep learning models on a dataset that included 15,486 negative images (no target weeds) and 17,600 positive images (target weeds) [143]. The VGGNet outperformed AlexNet and GoogLeNet in detection of weeds with an F1 score of 0.93 and recall values of 0.99.
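The F1 scores quoted throughout this section are the harmonic mean of precision and recall, computed from true positives (TP), false positives (FP) and false negatives (FN). The counts below are hypothetical, chosen only to illustrate the calculation:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F1 score from detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Hypothetical weed-detection counts: 90 weeds found correctly,
# 10 false alarms, 10 weeds missed.
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=10)
```

Because F1 penalizes an imbalance between precision and recall, it is a stricter summary than accuracy for detection tasks where background pixels dominate.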
The distribution of weeds in the field usually occurs in patches, but weedicides are sprayed evenly throughout the field, irrespective of the actual requirement. Hence, acquiring images of the entire field and localizing weeds using deep learning techniques will be of great help for site-specific weed management.
4 Water management
Modelling of hydrologic cycle components such as precipitation, runoff, evapotranspiration (ET) and change in soil moisture is essential for quantifying the water balance for sustainable water resource management and planning of irrigation and drainage systems [144]. Estimation of reference evapotranspiration (ET0) for irrigation water management requires a number of climatic parameters such as temperature, wind speed, relative humidity and solar radiation. Under limited data availability, however, empirical methods based on temperature, humidity or radiation can give a good approximation of ET0, although such methods give reliable estimates only for particular regions and may over- or underestimate values elsewhere [145,146,147]. Soil moisture prediction/estimation is a challenging task due to its spatio-temporal variability across the field. A number of sensor-based, empirical and statistical techniques are in vogue for indirect estimation of soil moisture at local or regional scales. Several studies highlight the importance of pedotransfer functions (PTFs) in estimating different soil moisture regimes, especially field capacity and permanent wilting point, as important indicators for estimating soil moisture content (SMC) [148,149,150]. However, developing such PTFs requires different soil physical properties such as soil texture (particle size distribution), bulk density and organic matter content (OMC), which makes the estimation of SMC a tedious task and prone to lower accuracy when considered at larger scales.
With the advent of high computational power, the increased capability to handle big data accumulated over several decades, and the widely acknowledged capacity of deep learning modelling techniques to handle such data, these models are being adopted rapidly by virtue of their proven ability to extract information for prediction and to better represent land–atmosphere interaction, without delving deeply into the complex physical mechanisms of the process under consideration [151, 152]. The following section elaborates the estimation of ET0 and soil moisture using different deep learning techniques with respect to data availability, lead time, interval, etc.
4.1 Estimation of evapotranspiration
Evapotranspiration is considered one of the most important components of the hydrologic cycle and governs irrigation water management, water resources management and hydrologic studies [153,154,155, 157]. ET0 is derived from a number of climatic variables and, in combination with a crop coefficient (Kc), is applied to estimate the crop water requirement of a particular crop [156]. Artificial intelligence techniques such as machine learning and deep learning using limited weather data have great potential for indirect estimation of ET0 through the development of robust models [145, 157].
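As an example of the temperature-based empirical methods mentioned above, the Hargreaves–Samani equation estimates ET0 from air temperature and extraterrestrial radiation, and a crop coefficient then converts ET0 into a crop water requirement. The input values below are illustrative only:

```python
import math

def et0_hargreaves(t_max, t_min, ra):
    """Hargreaves-Samani reference evapotranspiration (mm/day).

    t_max, t_min : daily maximum/minimum air temperature (deg C)
    ra           : extraterrestrial radiation expressed in mm/day
    """
    t_mean = (t_max + t_min) / 2.0
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

def crop_water_requirement(et0, kc):
    """Crop evapotranspiration ETc = Kc * ET0 (mm/day)."""
    return kc * et0

# Illustrative mid-season day: Tmax 32 C, Tmin 18 C, Ra ~ 15 mm/day,
# Kc = 1.15 (a typical mid-season value; actual Kc depends on the crop).
et0 = et0_hargreaves(32.0, 18.0, 15.0)
etc = crop_water_requirement(et0, kc=1.15)
```

Data-driven ET0 models are typically trained against Penman–Monteith estimates, with simple formulas such as this one serving as the temperature-only baseline they must beat.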
Machine learning (ML) and deep learning (DL) models for estimating ET0 from limited hourly data of a few climatic parameters have been successfully applied by Ferreira and Da Cunha [157]. In this study, a CNN model was applied to estimate daily ET0 using limited meteorological data such as temperature, relative humidity and terrestrial radiation, and the results were compared with those obtained using traditional ML models (i.e. RF, XGBoost and ANN). The performance of the ANN models was slightly better than that of the other traditional models, but the CNN model outperformed all the remaining models for all combinations of inputs. Overall, the CNN developed using 24 h hourly data and hourly radiation applied to sequential data reduced the root mean squared error (RMSE) by 15.9–21 per cent, increased the Nash–Sutcliffe efficiency (NSE) by 4.6–8.8 per cent and improved R2 compared to the machine learning models at regional and local scales. Besides agriculture- and forest-dominated locations, estimation of ET0 in urban areas has its own importance for greenery management and dealing with climate change. Although such studies are limited, one of them reported similar performance of a 1D CNN deep learning model and random forest (RF) for prediction of urban ET at the half-hourly scale by Vulova et al. [158]. In another study dealing with the estimation of ET0 in urban areas, deep learning multilayer perceptron models were applied to estimate daily ET0 in the Indian cities of Hoshiarpur and Patiala, and their performance was compared with the Generalized Linear Model (GLM), Random Forest (RF) and Gradient-Boosting Machine (GBM); the deep learning models were superior in terms of NSE, R2, mean squared error (MSE) and RMSE [159]. Roy [160] evaluated the performance of LSTM and bi-LSTM networks for predicting one-step-ahead ET0.
Bi-LSTM resulted in the highest R2, NSE and index of agreement (IOA), along with lower RMSE, relative root mean squared error (RRMSE) and mean absolute error (MAE), compared to other soft computing techniques such as Sequence-to-Sequence Regression LSTM (SSR-LSTM) and Adaptive Neuro Fuzzy Inference System (ANFIS) models. Better performance of bi-LSTM compared to LSTM for the estimation of ET0 and SM was also reported by Alibabaei et al. [161]. In another study, by Yin et al. [162], a single-layer bi-LSTM model with 512 nodes was applied to forecast short-term daily ET0 at 1–7 day lead times. Hyperparameters such as learning rate decay, batch size and dropout size were determined using the Bayesian optimization method, and the developed model was trained, validated and tested at three different locations in a semi-arid region of China. The performance of the bi-LSTM model was evaluated against Penman–Monteith based daily ET0. Among several meteorological input datasets, the bi-LSTM model with only three inputs (i.e. maximum temperature, minimum temperature and sunshine duration) performed best in forecasting short-term daily ET0 at all the meteorological stations. In another study, by Afzaal et al. [163], the climatic variables contributing most to ET0 prediction, namely maximum air temperature and relative humidity, were selected as inputs to LSTM and bi-LSTM models, which were trained and tested using data for 2011–2015 and evaluated on data for 2016–2017. Both models were suitable for estimating ET0, with low RMSE (0.38–0.58 mm/day) for all sites during the testing period; overall, no significant difference in accuracy between LSTM and bi-LSTM relative to the FAO 56 method for prediction of ET0 was observed. Proias et al. [164] applied a time-lagged RNN to predict near-future ET0 in Greece.
Higher values of R2 and lower RMSE were recorded for prediction of ET0 which were in good association with FAO-56 Penman Monteith method. A performance comparison of ET0 estimation using deep neural network (DNN), temporal convolution neural network (TCN), LSTM with other machine learning models such as RF and SVM and also models based on empirical relationships using different climatic dataset such as temperature, humidity, radiation revealed that temperature-based TCN had higher R2 and lower RMSE as compared to other machine learning and empirical models [165].
A geographical bearing on deep learning performance was observed: LSTM performed better in arid regions, whereas the nonlinear autoregressive network with exogenous inputs (NARX) performed better in a semi-arid region of the US when predicting ET0 at 1–7 day lead times [166]. However, the accuracy of the models reduced once the prediction period exceeded 7 days. Monthly average data of climatic parameters were used as inputs to support vector regression (SVR), Gaussian Process Regression (GPR), BFGS-ANN and LSTM models for predicting ET0 in arid and semi-arid climates of Turkey [167]; among these, the Broyden–Fletcher–Goldfarb–Shanno artificial neural network model performed best for estimation of ET0.
4.2 Soil moisture estimation
Soil moisture plays an essential role as a variable in studies related to water balance, hydro-climatological and ecological systems, and dominates the exchange of water and energy fluxes in understanding different environmental processes juxtaposed with land surface states [168]. The determination of point soil moisture in terms of field capacity (FC), permanent wilting point (PWP), etc. is often carried out for a small area or with limited areal extent and requires detailed laboratory analysis of soil samples, which is generally laborious and time consuming; the observations and analyses are also difficult to replicate because of the high variability of SMC in time and space [169, 170]. Further, retrieving precise soil moisture at local, regional and global scales has a significant role in many practical applications, including weather forecasts [171,172,173], drought and flood potential assessment [174,175,176], biogeochemical process characterization [158] and best agricultural and irrigation practices [178]. Accurate and precise prediction of SMC thus contributes substantially to effective disaster response, better estimation of crop water requirements, irrigation scheduling and other applications [179]. Estimation of soil moisture through process-based models is plagued by under-representation of key processes, excessive human influence and computational expense [180]. Deep learning has tremendous capabilities for soil moisture estimation as an alternative to conventional physically based models using satellite data. Song et al. [181] presented a deep belief network coupled with macroscopic cellular automata (DBN-MCA) by combining DBN and MCA models for the prediction of SMC in a corn field located in the Zhangye oasis.
Cross-validation results showed that, with both static and dynamic variables included as inputs, the DBN-MCA model performed better, showing an 18 per cent reduction in RMSE compared with the MLP-MCA model. Tseng et al. [182] presented a simulation system for generating synthetic aerial images and learning from them to simulate local SMCs using traditional as well as deep learning techniques. For most of the experiments, the CNN correlated field (CNNCF) method achieved the lowest test error compared with the other methods, namely (a) a constant prediction baseline, (b) linear Support Vector Machines (SVM), (c) Random Forests, Uncorrelated Plant (RFUP), (d) Random Forests Correlated Field (RFCF), (e) two-layer Neural Networks (NN) and (f) Deep Convolutional Neural Networks, Uncorrelated Plant (CNNUP). In another study [86], CNN-based regression models were applied to estimate soil moisture by integrating plant temperature (represented through thermal infrared images obtained from drone-based sensors) and in situ soil moisture measurements in an experimental farm. Three different machine learning techniques, including deep learning, ANN and kNN, were applied to estimate FC and PWP using PTFs for combinations of four soil datasets located in the Konya-Çumra plain, Turkey [183]; the deep learning modelling techniques using inputs of soil physical properties, including aggregate stability, gave the best performance in estimating FC for samples of calcareous soils. In another pioneering study, Yu et al. [184] modelled soil moisture using hybrid deep learning techniques at four different depths using SMC, climatological data, SWC and crop growth stage data from seven maize monitoring stations (2016–2018) located in Hebei Province, China.
It was shown that the hybrid modelling technique, comprising a CNN-based ResNet and a bi-LSTM model, performed better than traditional ML-based techniques such as MLP, SVR and RF. To further establish the improved capabilities of deep learning modelling techniques, Yu et al. [185] proposed a hybrid model combining the capabilities of a CNN and a gated recurrent unit (CNN-GRU), developed using SMC and climatological data obtained from five representative sites located in Shandong Province, China. The proposed hybrid CNN-GRU modelling technique performed better than stand-alone CNN or GRU models in terms of different performance indicators.
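The GRU used in these hybrids is a recurrent cell that updates its hidden state through reset and update gates. A minimal NumPy sketch of one step follows, with random untrained weights and illustrative sizes only (3 input features, hidden size 4), not the architecture of any cited study:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, p):
    """One GRU step (Cho et al. formulation).

    z gates how strongly the state is updated; r gates how much of the
    previous state feeds the candidate state h_tilde.
    """
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h_prev + p["bz"])
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h_prev + p["br"])
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h_prev) + p["bh"])
    return (1.0 - z) * h_prev + z * h_tilde

# Untrained cell run over a 10-step series of 3 input features
# (e.g. rainfall, temperature, prior soil moisture readings).
rng = np.random.default_rng(0)
p = {k: 0.1 * rng.standard_normal((4, 3)) for k in ("Wz", "Wr", "Wh")}
p.update({k: 0.1 * rng.standard_normal((4, 4)) for k in ("Uz", "Ur", "Uh")})
p.update({k: np.zeros(4) for k in ("bz", "br", "bh")})
h = np.zeros(4)
for x in rng.standard_normal((10, 3)):
    h = gru_cell(x, h, p)
```

In a CNN-GRU hybrid, convolutional layers would first compress spatial or spectral inputs into the feature vectors x fed to this recurrent stage.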
Further, deep learning has been proposed as an alternative to conventional physically based models for soil moisture estimation using satellite data. In the past decade, the SMC estimation capabilities of remote sensing products, including the advanced microwave scanning radiometer (AMSR) [186], the Advanced Scatterometer [187], the soil moisture and ocean salinity (SMOS) mission [188] and the soil moisture active passive (SMAP) mission [189], among others, coupled with GIS techniques, have been widely explored, and this has tremendously improved the measurement accuracy and efficacy of SMC retrieval [188]. Microwave remote sensing, with its capability to penetrate clouds and to a certain depth into the soil surface, provides estimates of SMC from the soil dielectric properties with good consistency over large spatial scales [190]. In SMC estimation using satellite imagery, vegetation over the land surface is a major constraint for detecting signals of water stored within the soil profile, as vegetation attenuates soil emissions and adds its own emissions to the microwave signal, causing further error in the actual emission measured from the soil profile [191]. Fang et al. [192], in a novel effort, applied a combination of remote sensing and deep learning modelling techniques to develop a CONUS-scale LSTM network for the prediction of SMAP data and showed that the proposed modelling framework exhibits better generalization capability in both space and time; overall, the proposed approach of using deep learning techniques for modelling soil moisture dynamics and for projecting SMAP proved efficient even with shorter datasets. Zhang et al. [193] proposed a deep learning model for the estimation of SMC in China using Visible Infrared Imaging Radiometer Suite (VIIRS) remote sensing imagery as input.
The study demonstrated the capability of the deep learning modelling technique to capture in situ surface SMC from the VIIRS imagery, with a high coefficient of determination (R2 = 0.99) and low root mean squared error (RMSE = 0.0084); these results were better than the soil moisture products obtained from SMAP and the Global Land Data Assimilation System (GLDAS) (0–100 mm). Lee et al. [194] employed a deep learning modelling technique to estimate soil moisture over the Korean peninsula using satellite-observed thermal products, and compared its performance with the soil moisture products of AMSR2 and GLDAS. Wang et al. [195] developed a soil moisture inversion model (SM-DBN) using a DBN to extract soil moisture data from Fengyun-3D (FY-3D) Medium Resolution Spectral Imager-II imagery in China; based on simulated and actual ground measurement data, the developed model outperformed conventional linear regression (LR) and ANN models in terms of different performance indicators. Masrur Ahmed et al. [196] applied deep learning hybrid models, i.e. complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) combined with a convolutional neural network-gated recurrent unit (CNN-GRU), for prediction of daily time-step surface SMC and demonstrated the prediction capability of the hybrid CEEMDAN-CNN-GRU model. The hybrid model was built by integrating MODIS satellite-derived data, ground-based observations and climate indices, and was tested at important stations in the Australian Murray Darling Basin.
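The RMSE and R2 indicators used throughout these comparisons can be computed directly from observed and predicted series; the soil moisture values below are illustrative only:

```python
import math

def rmse(obs, pred):
    """Root mean squared error between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Illustrative volumetric soil moisture series (fraction).
observed = [0.10, 0.20, 0.30, 0.40]
predicted = [0.11, 0.19, 0.32, 0.38]
error = rmse(observed, predicted)
fit = r_squared(observed, predicted)
```

Note that R2 is relative to the variance of the observations, so a model can score a high R2 on a highly variable site while its absolute RMSE remains too large for irrigation scheduling.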
5 Post production interventions
Postharvest loss of food crops is not only a loss of food but, in a wider perspective, a loss of natural resources and agricultural inputs and, most importantly, a lost opportunity to alleviate hunger. The gargantuan volume of these losses has prompted researchers, policy makers and funding agencies alike to repeatedly rethink the actions and approaches that could reduce, if not eliminate, this malaise. Besides postharvest losses, which by generous estimates hover around 20–23 per cent [21], there is another component termed "food waste", which amounts to a 30 per cent loss of agricultural produce at the retailer and consumer ends combined [197]. While postharvest losses are quantitative in nature and are mainly caused by limited managerial and technical competencies, food waste is primarily associated with unconsumed food due to consumer behaviour, regulations and quality standards.
Rapid quality evaluation of agricultural produce is widely carried out by infrared spectroscopy combined with appropriate chemometrics [198]. The quality of a food product is generally ascertained by its moisture, protein and fat content, or by variations in them [199]. The interaction of incident light with the chemical molecules, and its subsequent scattering, expresses the characteristics of a food sample. Information about the quality of the food sample is then obtained by linear and nonlinear chemometric methods, which provide rapid information about the internal and external quality of the agro-produce [200]. That said, generalization of sequential chemometric methods is a distant possibility owing to the inherent heterogeneity of samples of biological origin. This heterogeneity introduces spectral variability, redundant data and optical noise, all of which hinder feature extraction by chemometric methods. There is thus widespread divergence between calibration and target datasets, placing considerable limitations on spectral analysis.
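One standard chemometric preprocessing step used to suppress the scatter-induced spectral variability described above is the standard normal variate (SNV) transform, which rescales each spectrum by its own mean and standard deviation. A minimal sketch, with illustrative absorbance values:

```python
import math

def snv(spectrum):
    """Standard Normal Variate: centre one spectrum on its own mean and
    scale it by its own (sample) standard deviation, reducing
    multiplicative scatter effects before chemometric modelling."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in spectrum) / (n - 1))
    return [(v - mean) / std for v in spectrum]

# Illustrative raw NIR absorbance values for a single sample.
raw = [0.52, 0.55, 0.61, 0.70, 0.66, 0.58]
corrected = snv(raw)
```

Because each spectrum is normalized against itself, SNV needs no reference sample, which is one reason it remains popular ahead of both classical chemometrics and deep learning models.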
It is well established that deep learning models can solve complex problems rapidly, a capability attributable to the robustness of models bolstered by deep neural network architectures. The automatic learning feature of deep learning models makes them very suitable for assessing the postharvest quality of agricultural produce, in terms of geographical origin, identification, morphological features, composition, texture, soluble solid content, etc. [49, 201, 202]. Deep learning approaches facilitating image-based applications with improved accuracy and resilience for various postharvest interventions on agro-produce are figuratively explained in Fig. 5 and discussed in the following sub-sections.
5.1 Identification of varietal variability
Geographical origin has a noteworthy bearing on the various attributes of agricultural products, and environmental variation introduces varietal differences in composition, morphology and economic value. Deep learning models working with infrared spectra have demonstrated adequate success in identifying the geographical origins of many agro-products (Table 1). While flavour-causing chemicals determine the success with coffee beans [203, 204], colour and variations in the ratio of fatty acids made deep learning effective for olive oils [203]; for apples, it is variations in cellular structure and related external/internal features (e.g. soluble solid content) that enabled successful application of deep learning models [205]. Pure seeds form the basis of a sustainable agricultural ecosystem, and varietal identification of seeds assumes critical importance for growers as well as breeders to ensure the desired productivity and product quality. The literature shows that CNN models can successfully predict the purity of rice seeds [206], hybrid loofah and okra seeds [207] and oat seeds [208]. Herbal medicinal plant seeds that are difficult to recognize visually were also successfully classified by deep learning models [204]. The effectiveness of deep learning classification reportedly extends well beyond two-level classification, and CNN models with a larger number of training iterations can perform far better than kNN and SVM models [206]. Citing the need for a rapid and efficient means of selecting/classifying loofa (Luffa aegyptiaca) seeds of the intended progeny, an NIR-HSI (975–1648 nm) approach combined with deep learning was developed using 6136 hybrid okra and 4128 loofa seeds of six varieties [207].
The deep convolutional neural network (DCNN) discriminant analysis model had an accuracy of more than 95 per cent and could be adopted for automated selection of cross-bred progeny. Oat seeds were discriminated by variety using HSI (874–1734 nm) and a DCNN. It was concluded that HSI combined with an end-to-end DCNN could be a potent rapid tool for accurate (99.2 per cent) variety classification of oat seeds [208]. The same approach reaped similar results with seven varieties of Chrysanthemum comprising 11,038 samples; here, the DCNN was based on spectra over the full wavelength range and achieved an accuracy of 100 per cent for both training and testing sets [209]. The superiority of a deep learning approach based on a CNN model over conventional methods (partial least squares-linear discriminant analysis and PCA with logistic regression) was demonstrated on an NIR dataset for grapevine classification [210]. Considering the decreasing number of experts for grape variety identification, deep learning techniques were employed for digitization in viticulture [211]. The models were trained with multiple features of a grape plant, e.g. leaves, fruits, etc.; eventually the models were combined into a single model for identification of five grape varieties. The accuracy achieved by the combined model, called ExtResNet (99 per cent), was far superior to the accuracies achieved individually by the Kernelwise Soft Mask (KSM) (47 per cent) and ResNet (89 per cent) models. These findings hold the key to the future of identifying variety-dependent diseases or any special fungal disease in grape.
5.2 Qualitative analysis of agro produce
As mentioned in the preceding section, one of the major causes of food waste is rejection or non-consumption of fruits due to poor quality. For most fruits, quality means an adequate amount of the desired taste and uniform firmness across the fruit surface. While taste can be represented by soluble solid content (SSC), the texture of the fruit is essentially indicated by firmness, which in turn depends on the right amount of moisture, uniform ripening and a puncture-less or bruise-free outer skin. Deep learning-based spectral analysis demonstrated considerable success in predicting the SSC and firmness of Korla fragrant pear using an SAE-FNN model with VIS and NIR spectral data [212]. The sweetness of orange juice was predicted more accurately, in terms of saccharose concentration, by a three-layer CNN within a deep learning approach [213] than by conventional chemometric methods. Black goji berries are a storehouse of bioactive components with high medicinal value; a deep learning approach achieved very good results in predicting the phenolics, flavonoids and anthocyanin content of the juice as well as the dried berries [214]. Mishandling of fruits during harvest and transportation results in cuts, bruises and fissures on the outer skin and cellular damage deep inside the skin. These injuries put the fruits under stress, resulting in enhanced respiration and senescence rates. All this translates into changes in the biochemical properties of the fruits which can be captured by NIR spectra; a deep learning-based qualitative analysis for winter jujubes along these lines was carried out by Feng et al. [215]. A 2B-CNN model was found to be highly robust for feature selection in detecting bruises in strawberries when the input dataset was a fusion of spectral and spatial data [204]. An attempt has also been made to estimate the stages of ripeness in strawberry using a combination of HSI and deep learning [216].
Feature wavelengths were selected using a sequential feature selection algorithm, and 530 nm was found to be the most important wavelength for field conditions. The AlexNet CNN (a popular deep learning architecture) was observed to have a prediction accuracy of 98.6 per cent for detection of ripeness in strawberries. Findings of this type can be utilized for the development of real-time precision strawberry harvesting systems. There are other instances of fruit classification based on colour, shape and texture [217,218,219] wherein the absence of deep learning models restricted the applicability of the approach to similar fruits of a particular species only; the same classification factors, however, could be used more accurately with a k-NN algorithm [220]. The evidence and instances discussed above indicate the strategic superiority of deep learning methods over traditional data analysis methods; it would therefore be pertinent to conclude that further studies on deep learning-based quality detection of fruits are warranted.
There are a few instances where deep learning has been applied to quality-related evaluation of vegetables. Cucumbers are subject to damage caused by pests, insects, transportation-induced surface discolouration, etc. A stacked sparse autoencoder (SSAE), in isolation and coupled with a CNN, has been attempted for deep feature representation and classification of damaged cucumbers based on HSI [221]; the defective region was screened out by a CNN model based on the RGB channels, while the mean spectra of this defective area were used for SSAE-CNN classification. An accuracy of 91.1 per cent was achieved by this classification method. A very simple basis of grading for okra can be pod length. Deep learning models were applied to a dataset of 3200 images [222]; the accuracies exhibited by AlexNet, GoogLeNet and ResNet50 were 63.5, 69.0 and 99.0 per cent, respectively. Potatoes are one of the most popular food crops, but they are prone to viral infection. Deep learning with HSI using a fully convolutional network was successfully applied for the detection of Potato virus Y [223]. Classification of tomatoes into seven selected species was attempted using deep learning [224]; a network comprising four CNNs was trained to predict the tomato species with an accuracy of 93 per cent. Tomatoes were also classified on the basis of exterior surface defects using 43,843 images as the dataset for deep learning [225]. Feature extraction was carried out by trained ResNet classifiers, which were found competent in identifying surface abnormalities in tomatoes. A deep learning-based rapid recognition system for identifying nutrition disorders in 11 kinds of tomatoes was developed by pre-processing the dataset with a pre-trained Enhanced Deep Super-Resolution Network technique [226]. This deep learning-based technique could attain an accuracy of 81.11 per cent, much higher than existing techniques with the same objective.
Predicting the actual oil yield from an oil palm plantation is tricky, as the number of mature oil-bearing crowns cannot be judged directly. Deep learning with two different CNNs was applied to identify and forecast the numbers of mature and young oil palms from satellite images [227]. The forecasting outcomes were exported to geographic information system software for mapping the grown and young palms; accuracies were 92.96 and 95.11 per cent for mature and young oil palms, respectively. Some deep learning work has also been carried out with dates, covering the distinction of healthy dates from imperfect ones and date yield. The difference in growing phases between healthy and imperfect dates formed the basis of the modelling [228]. The study was conducted across four classes of dates, Khalal, Rutab, Tamar and defective dates, using a CNN technique with the VGG-16 architecture, and yielded results with 96.98 per cent accuracy. Mature and premature date images (8000) formed the dataset for a deep learning-based tool for predicting date type, maturity and harvesting decision with accuracies of 99.01, 97.25 and 98.59 per cent, respectively [229].
5.3 Detection of food contamination
Contamination of agro-produce with foreign materials can be caused by poor agricultural inputs (polluted water, inconsistent fertilizers, etc.), improper handling (field dirt, crop residues, etc.) and wrong storage conditions (fungi, beetles, pesticide residues, etc.). Ingestion of such foods may result in detrimental changes to human physiology, leading to the onset of various co-morbidities. A brief account of the work carried out on determining contaminants in food by applying deep learning tools is reported in this section. Published work employing traditional machine learning algorithms for detection of food contaminants exists [230,231,232]; however, applications of deep learning with a similar objective are sparse. Concerted efforts are required to fully utilize the potential of deep learning to replace traditional machine learning methods [200]. Prediction of morbidity arising from gastrointestinal infections caused by contaminated food has been attempted using a DNN [233]. A target region in China was the locale of this study, comprising 227 contaminants in 119 types of widely consumed foods. The features of the contamination indexes were extracted by a deep denoising autoencoder (DDAE), which is structurally similar to an SAE with multiple hidden layers. The DDAE model was found to perform better (success rate 58.5 per cent) than conventional ANN algorithms. Manual detection of foreign objects lodged at different locations on juglans (walnuts) is tough and inconsistent. The complex shape of juglans leads to improper image segmentation, rendering the machine vision approach inefficient. However, a deep learning approach comprising a multiscale residual fully convolutional network was found to be efficient in image segmentation (99.4 per cent) and feature extraction for juglans [234].
The proposed method could detect and correctly (96.5 per cent) classify leaf debris, paper scraps, plastic scraps and metal parts clinging to juglans. A complete cycle of segmentation and detection took less than 60 ms. The occurrence of pest fragments in stored food samples is rampant, and human inspection is time-consuming and prone to errors. A deep learning approach was applied for rapid identification of 15 beetle species that frequently contaminate stored food products [122]. A convolutional neural network was trained on a dataset comprising 6900 microscopic images of elytra fragments; the model performed with an overall accuracy of 83.8 per cent. Pesticide residues are a common contaminant of fruits, and their presence poses a serious threat since fruits are consumed as table food. Apples with four pesticide treatments (chlorpyrifos, carbendazim and two mixed pesticides) at a concentration of 100 ppm were imaged using a hyperspectral camera, in all making a dataset of 4608 images per category [235]. The normalized (227 × 227 × 3 pixels) images were used as input to the CNN for detection of pesticide residue. At a training epoch of 10, the accuracy of detection for the test set was 99.09 per cent. Thus, this method demonstrated an effective non-contact technique for detecting pesticide residues on harvested apples. Adulteration of foods for enhanced monetary returns is a malpractice spread across the globe. Milk is adulterated with a variety of substances that threaten human well-being. Spectral data from infrared spectroscopy were used for binary classification of (un)adulterated samples using a CNN model with Fourier-transformed data [236]. The model was found to be 98.76 per cent accurate, far better than gradient boosting and random forest methods. Again, CNN models were found to be very accurate in determining the adulteration of different meats: chicken, turkey and pork [237].
Equipped with mid-infrared spectral data, CNN models were also found to be very accurate in classifying strawberry and non-strawberry purees [203]. It can hence be understood that deep learning has successfully touched almost all aspects of food contamination. The technique is therefore poised to be a potent tool for rapid, non-contact and effective detection of contamination in agro-produce.
5.4 Food quality sensors
Precise non-invasive discrimination of food quality has been made possible by the advent of electronic and multi-sensor technology applications. The human visual and gustatory systems can be mimicked with reasonable accuracy by the electronic eye (EE) and the voltammetric electronic tongue (VET), respectively. These instruments can rapidly provide comprehensive information about the subject. While the EE captures the colour and optical texture of the samples and compiles the result as the overall appearance of the subject, the VET has an array of sensors which are stimulated to send signals by the dissociative ions of a liquid sample [238, 239]. Instrumentation for rapid detection of food quality has had a paradigm shift with the advent of the EE and VET [240]. A deep learning algorithm was used to extract features and to non-destructively discriminate pu-erh tea by its storage time (0, 2, 4, 6 and 8 years) with the help of a data fusion strategy applied to the signals from a combined EE and VET [241]. The main parts of the EE system were an eyepiece, a stand and an LED lamp with adaptor; the eyepiece was adjusted to capture a clear image of the pu-erh tea so as to gather all the relevant information. The VET employed in this study [242] comprised a signal-conditioning circuit and an array of eight sensors (glassy carbon, tungsten, nickel, palladium, titanium, gold, silver and platinum), each in a standard three-electrode configuration with an auxiliary electrode and an Ag/AgCl reference electrode. The response signals from the sensors were collected by a DAQ card (NI USB-6002, National Instruments, USA), and LabVIEW software was used to control the DAQ card and analyse the collected signals. Deep learning was introduced to eliminate manual intervention in feature extraction from the EE and VET signals.
In turn, 1-D and 2-D CNNs served as the deep learning algorithms, leading to an overall improvement in the recognition of data patterns as compared with conventional techniques. This was followed by Bayesian optimization for selection of the optimal hyperparameters of the CNN model. The principal novelty of this study lies in the fact that, instead of feeding the data individually (EE-CNN or VET-CNN), the data from the hardware were fused and fed to the deep learning algorithm, leading to a more accurate and robust classification. This study opens a new window of opportunity for employing deep learning in intelligent sensory analysis, which shall lead to a reliable, intelligent and non-destructive quality control tool applicable to products other than pu-erh tea as well.
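The 1-D convolutions such a network applies to raw sensor signals can be sketched in a few lines. The signal and kernels below are illustrative stand-ins (a moving-average filter and an edge filter), not those of the cited study:

```python
import numpy as np

# Minimal NumPy sketch of the 1-D convolution at the heart of a 1-D CNN,
# applied to an electronic-tongue-style signal. Kernel values are illustrative.
def conv1d(signal, kernel, stride=1):
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(0, len(signal) - k + 1, stride)])

signal = np.sin(np.linspace(0, 4 * np.pi, 100))   # stand-in sensor response
smooth = conv1d(signal, np.ones(5) / 5)           # moving-average feature map
edges = np.maximum(conv1d(signal, np.array([-1.0, 0.0, 1.0])), 0.0)  # ReLU(edge filter)
print(smooth.shape, edges.shape)                  # (96,) (98,)
```

A trained 1-D CNN learns such kernels from data rather than fixing them by hand, and stacks many of them with nonlinearities between layers.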
6 Dataset resources
Chemical and physical features of fruits in terms of shape, nutrient content, maturity, firmness, damage, disease, etc., are reflected in RGB images or in spectral information, which can be interpreted and classified using deep learning models. Breakthrough results in deep learning applications for quality evaluation of agro-produce will also require good input data. A dynamic dataset of high-quality fruit images called Fruit-360 [131] was developed with the sole purpose of creating deep learning models; care has been taken to capture images with a uniform background and with measures to minimize noise. PlantVillage, created by Hughes and Salathe [243], holds a collection of 87,848 expertly annotated images divided into 58 classes, each defined as a pair of a plant and a related disease, with some classes comprising healthy plants. There are 25 different healthy and diseased plants among the 58 classes, and more than a third of the images (37.3 per cent) were taken in the field under actual conditions. Wheat Disease Database 2017 is a collection of 9230 wheat crop images covering six wheat diseases (smut, powdery mildew, stripe rust, black chaff, leaf rust and leaf blotch) and a healthy class, annotated at image level by agricultural experts [101]. Multi-class Pest Dataset (2018) is a collection of 88,670 images with 582,170 pest objects divided into 16 categories [244]. A large multiclass dataset called DeepWeeds [140], which comprises 17,509 images of eight different weed species (Chinese apple, parthenium, rubber vine, prickly acacia, lantana, parkinsonia, siam and snake weed) and various off-target (or negative) plant life native to Australia, has been developed for deep learning applications.
CropDeep is a collection of 31,147 images of vegetables and fruits under laboratorial greenhouse conditions as well as over 49,000 annotated instances from 31 different classes developed exclusively for deep learning-based classification and detection of species [29].
Multiple data sources will make the models robust, and possibilities for data fusion will surely bolster the perspectives of future research. Data from different sources can be combined to improve the overall quality of the data, thus ensuring high-quality representations [245]; along this line, an instance of combining spectral and spatial data from HSI was found to exhibit improved performance [204].
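As a hedged illustration of such spectral-spatial fusion (the cube dimensions and the spatial statistics are chosen for the example, not drawn from the cited work), an HSI cube can be reduced to a single fused feature vector by concatenating the two views:

```python
import numpy as np

def fuse_features(hsi_cube: np.ndarray) -> np.ndarray:
    """hsi_cube: (H, W, bands). Returns one fused feature vector."""
    # Spectral branch: mean reflectance per band over the region of interest
    spectral = hsi_cube.mean(axis=(0, 1))                       # (bands,)
    # Spatial branch: simple texture statistics on one representative band
    band = hsi_cube[:, :, hsi_cube.shape[2] // 2]
    spatial = np.array([band.mean(), band.std(),
                        np.abs(np.diff(band, axis=0)).mean(),   # vertical gradient energy
                        np.abs(np.diff(band, axis=1)).mean()])  # horizontal gradient energy
    # Early fusion: concatenate both views into one representation
    return np.concatenate([spectral, spatial])

cube = np.random.default_rng(0).random((32, 32, 100))  # synthetic 100-band cube
features = fuse_features(cube)
print(features.shape)  # (104,)
```

In a deep learning pipeline the two branches would typically be learned (e.g. a 1-D network on spectra and a 2-D network on images) before the fused vector is passed to a classifier.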
The success of a deep learning model improves if the input data originate from a variety of dependent factors; in the case of agricultural produce, these can be variety, origin, size or temperature. This type of model training is called multi-task learning, where a single model learns several related tasks simultaneously through shared knowledge, with each task retaining its original labels [246]. The inbuilt structure of multi-task learning allows a general understanding of feature patterns across the various tasks, while ignoring noisy and irrelevant data. Another machine learning approach is transfer learning, where the information picked up while learning source tasks is applied to target tasks; this approach improves model performance as retraining from scratch is avoided [35, 247].
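The transfer learning principle can be sketched with a toy example in which the pretrained backbone is a fixed random projection standing in for, say, VGG16 convolutional features; all sizes, labels and learning rates here are illustrative assumptions, not a real pretrained network:

```python
import numpy as np

# Hedged sketch of transfer learning: keep the "pretrained" feature
# extractor frozen and train only a small task-specific head on top.
rng = np.random.default_rng(7)
W_frozen = rng.normal(size=(64, 8)) / 8.0   # stand-in pretrained weights: never updated

def extract(x):
    """Frozen backbone: features are computed but never retrained."""
    return np.maximum(x @ W_frozen, 0.0)    # ReLU features

# A tiny synthetic target task whose labels are learnable from the features
X = rng.normal(size=(200, 64))
F = extract(X)
y = (F[:, 0] > np.median(F[:, 0])).astype(float)

# Train only a logistic-regression head on the frozen features
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(2000):
    z = np.clip(F @ w + b, -30.0, 30.0)
    p = 1.0 / (1.0 + np.exp(-z))            # sigmoid
    w -= 0.5 * F.T @ (p - y) / len(y)       # gradient step on the head only
    b -= 0.5 * (p - y).mean()

acc = ((p > 0.5) == y.astype(bool)).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

The same pattern underlies fine-tuning pretrained CNNs for new crops or varieties: the expensive representation is reused, and only the final layers are recalibrated on the (usually small) target dataset.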
7 Deep learning model performance and causal analysis
Deep learning model validation quantifies the expected performance of a model based on how well it responds to unseen data. For this reason, model validation is typically done on data other than the training data. Approaches for generating datasets for deep learning model development include train/validate/test percentage splits, k-fold cross-validation and time-based splits [248]. The effectiveness of a deep learning algorithm is measured using a defined metric on this unseen data, which is then used to compare the predictive power of different models. In particular, for image classification using CNN architectures, metrics such as precision, accuracy, recall and F1-score, along with the receiver operating characteristic (ROC) curve, are quite useful [249]. For object detection models, Intersection over Union (IoU), mean Average Precision (mAP), Average Precision, etc. [250], are most commonly preferred for comparing effectiveness. These validation approaches and metrics are applied iteratively over the data to guide feature selection and optimize the hyperparameters. The learning curves of CNN models help identify whether a model is following the right learning trajectory, with a bias-variance trade-off; analysing the learning curves of different models reveals their respective performances. For example, a model with stable learning curves across the training and validation data will perform well on unseen data. A model's generalization ability and its propensity for overfitting or underfitting are key observations from the learning curves [251, 252]. Performance is an important criterion for identifying the right architecture: a model that makes effective use of memory resources can generate quick predictions and often favours real-time processing of data. Ease of retraining is also an important criterion, as it helps in accommodating changes to existing models.
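The classification and detection metrics named above follow directly from their definitions; a minimal from-scratch sketch (the counts and boxes are invented for illustration):

```python
def precision_recall_f1(tp, fp, fn):
    """Classification metrics from true positives, false positives, false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

p, r, f = precision_recall_f1(tp=80, fp=20, fn=20)
print(round(f, 2))                                   # 0.8
print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 3)) # 0.143
```

mAP builds on these ingredients: detections are matched to ground truth at an IoU threshold, a precision-recall curve is traced per class, and the areas under those curves are averaged.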
Deep learning models have managed to address various complex agricultural engineering problems, but they often fail to make human-level inferences. Especially in deep neural networks with a large number of layers, causality remains a challenge, and such models have been observed to generalize poorly beyond the training data. There are limitations to reliable decision making and robust prediction with machine learning based on correlational pattern recognition [253,254,255]. For example, in plant stress detection or disease identification, the model is trained on a huge volume of data on the assumption that this will help it generalize the distribution and learn suitable parameters; in reality, the distribution often changes drastically beyond the training data. A CNN model trained to identify plant stress in RGB images may fail to identify the stress in a new environment with different lighting conditions. The same applies to object detection algorithms: a YOLO or Faster R-CNN object detector trained to detect fruits may detect a wrong object at a slightly different angle or against a new background [250, 256]. As there is uncertainty in the actual environment, it is often impossible to train a model to cover all possible scenarios. This is highly relevant in agricultural engineering applications, as the deep learning models interact directly with the environment. If a model does not possess causal understanding, it fails in dealing with new situations.
A general approach for solving a real-world problem involves collecting a large pool of data, splitting the data into training, validation and testing sets, and evaluating performance by measuring accuracy; the process is repeated until the desired level of accuracy is reached. Even the benchmark datasets cited above, like Fruit-360 [257] and PlantVillage [258], fall into the same category. The transfer-learning-based deep learning models using CNN architectures such as VGG16, AlexNet and GoogLeNet that are widely used in agricultural research are fine-tuned image classifiers that can identify new types of patterns; these models, too, may respond poorly to changes in the environment. The main objective should be to absorb as much knowledge as possible from fewer training examples, and the model should be able to reuse the knowledge gained without continuous retraining in a new environment. It is worth noting that an accurate model is often insufficient for informed decision making when it has been trained on statistical regularities instead of causal relations; a causal model is capable of responding to situations it has not encountered before.
8 Concluding remarks and way forward
This review paper assembles a wide range of applications of deep learning techniques for inducing precision in agricultural mechanization, water management and postharvest operations. The key findings of this mammoth review are presented in an easily referable tabular format structured around the different aspects of deep learning application to the engineering aspects of precision agriculture (Table 1). Although instances of the application of deep learning in precise pre- and postharvest agricultural operations abound, there is enough evidence to inspire researchers to develop more creative and computationally sound deep learning models for some of the identified associated challenges.
The size of the dataset is a vital parameter that gives statistical strength to deep learning models. Collection of quality data is difficult and challenging, and the complexity arises from the fact that the challenges are multi-dimensional. The data-acquiring tool needs to be consistent during the course of data collection; otherwise, even a large dataset would not assure a robust model, and predictions would be unreliable. This propels us to think of effective data clean-up tools; in other words, the models should be capable of handling uncertainties with concepts such as Bayesian inference [259]. Development of efficient tools will boost the supply of quality data to the model and remove noise and redundancy. Once quality data collection and data cleaning are overcome, the next challenge is data labelling, an activity of considerable monotony with a direct bearing on model efficacy. Deep learning models need to learn from labelled data and, as mentioned earlier in this paper, public datasets are an option for deep learning modelling; however, the variability in agricultural inputs and outputs arguably limits the use of these datasets for global precision agriculture purposes.
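As a toy illustration of the Bayesian idea (the audit counts below are invented), a Beta-Binomial update can quantify the uncertainty about a dataset's label-noise rate from a small manual audit:

```python
# Hedged sketch: Bayesian updating of a label-noise rate.
# Prior Beta(1, 1) is uniform over [0, 1]; each audited label is a
# Bernoulli observation (noisy or clean), so the posterior stays a Beta.
alpha, beta = 1.0, 1.0        # uniform prior over the noise rate
noisy, clean = 12, 88         # outcome of auditing 100 labels (illustrative)
alpha += noisy
beta += clean
posterior_mean = alpha / (alpha + beta)
print(round(posterior_mean, 3))  # 0.127
```

Rather than a single point estimate, the full posterior lets a pipeline decide, for instance, whether more auditing is needed before the labels are trusted for training.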
Data augmentation is a potent tool for overcoming the limitations of public datasets. It not only helps in increasing the volume of the dataset but also ensures consistency of quality and dimension. The techniques involved in data augmentation range from simple flipping, rotating and overturning of the images [260], to amplitude and frequency deformation [261], to adding Gaussian noise [262]. A compilation of the comparative performance of these techniques on agricultural data is absent, although desirable.
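The simple image-level operations named above can be sketched as follows; the image, noise level and clipping range are illustrative assumptions:

```python
import numpy as np

# Sketch of basic augmentation: one source image (H, W, C) with values in
# [0, 1] yields several training variants.
def augment(image, rng):
    out = [image]
    out.append(np.fliplr(image))                      # horizontal flip
    out.append(np.flipud(image))                      # vertical flip ("overturning")
    out.append(np.rot90(image))                       # 90-degree rotation
    noisy = image + rng.normal(0.0, 0.05, image.shape)  # additive Gaussian noise
    out.append(np.clip(noisy, 0.0, 1.0))              # keep valid pixel range
    return out

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
batch = augment(img, rng)
print(len(batch))  # 5 variants from one source image
```

Augmentation applied this way multiplies the effective dataset size without new field collection, though label-preserving validity (e.g. a flipped leaf is still the same disease) must hold for each operation.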
Another way of alleviating the data dependence of deep learning models is to train them with data generated from numerical simulation based on the laws governing the physical phenomena [263]. Such data need to be captured for precision pre- and postharvest agricultural operations as well; care must be taken to curb extrapolation and observational biases [180] so as to comprehensively exploit the advantages of deep learning. Data used for deep learning modelling must be consistent in format [264], besides being smooth, temporal, spatially coherent, etc. [265]. This consideration has to be kept in mind while collecting data from the precision agriculture domain.
It is foreseen that there will be a dramatic proliferation of deep learning applications in pre- and postharvest agricultural engineering operations. Interfacing the results of deep learning models with application-based hardware will require understanding and efficient interpretation of the statistically outstanding performance of these models, especially on occasions where they outperform human experts [180]. Failure of deep learning networks has been attributed to the difficulty of conclusive interpretation [266]; research needs to focus on eliminating such instances and strive for self-explanatory models. Deep learning frameworks currently fail to encompass weather or environmental conditions while predicting parameters in the agri-ecosystem domain. It is well understood that weather is a stochastic parameter riding on complex nonlinear relationships among multiple components [240]; there is an immediate need to include weather information during data collection to take full advantage of deep learning models. There are 3D models representing the architectural traits of soil, crops and agricultural commodities across different origins and varieties; coupling these models to a deep learning framework would substantially bolster parameter prediction capabilities while giving due weightage to the subject traits. Effective deep learning modelling for addressing soil-water issues and yield estimation concerns can be achieved once the deep learning framework unites with multimodal data streams from aerial as well as ground-level sensing. Hopefully, approaches will continue to evolve rapidly in the near future, realizing the remarkable possibilities that deep learning modelling holds, wherein computer systems shall be able to identify, classify, quantify and predict in scenarios leading us to an era of autonomous precision agricultural engineering operations.
References
Stathers AT, Holcroft D, Kitinoja L, Mvumi BM, English A, Omotilewa O, Kocher M, Ault J, Torero M (2020) A scoping review of interventions for crop postharvest loss reduction in sub-Saharan Africa and South. Nat Sustain 3:821–835. https://doi.org/10.1038/s41893-020-00622-1
World Population Prospects 2019 (UNDESA, 2019)
Alexandratos N, Bruinsma J (2012) World agriculture towards 2030/2050: the 2012 revision ESA Working Paper No. 12–03 FAO. https://www.fao.org/3/ap106e/ap106e.pdf
World Water Assessment Programme (Nations Unies) (2018) The united nations world water development report 2018 (United Nations Educational, Scientific and Cultural Organization, New York, United States) www.unwater.org/publications/world-water-development-report-2018
FAO (2017) The future of food and agriculture-Trends and challenges. Rome
Kray HA (2012) Farming for the future. The environmental sustainability of agriculture in a changing world pubdocs.worldbank.org/en/862271433768092396/Holger-Kray-RO-SustainableAg-hkray ENG.pdf
Bolten JD, Crow WT, Zhan X, Jackson TJ, Reynolds CA (2009) Evaluating the utility of remotely sensed soil moisture retrievals for operational agricultural drought monitoring. IEEE J Sel Top Appl Earth Obs Remote Sens 3:57–66. https://doi.org/10.1109/JSTARS.2009.2037163
Padhee SK, Nikam BR, Dutta S, Aggarwal SP (2017) Using satellite-based soil moisture to detect and monitor spatiotemporal traces of agricultural drought over Bundelkhand region of India. Giscience Remote Sens 54:144–166
Osakabe Y, Osakabe K, Shinozaki K, Tran LS (2014) Response of plants to water stress. Front Plant Sci 5. https://doi.org/10.3389/fpls.2014.00086
Kamarudin MH, Ismail ZH, Saidi NB (2021) Deep Learning sensor fusion in plant water stress assessment: a comprehensive review. Appl Sci 11:1403. https://doi.org/10.3390/app11041403
Chai Q, Gan Y, Zhao C, Xu HL, Waskom RM, Niu Y, Siddique KH (2016) Regulated deficit irrigation for crop production under drought stress. A review. Agron Sustain Dev 36:3
Keswani B, Mohapatra AG, Mohanty A, Khanna A, Rodrigues JJPC, Gupta D, Albuquerque VHC (2019) Adapting weather conditions based IoT enabled smart irrigation technique in precision agriculture mechanisms. Neural Comput Appl 3131:277–292
Hakkim V, Joseph E, Gokul A, Mufeedha K (2016) Precision farming: the future of Indian agriculture. J Appl Biol Biotechnol 4:68–72
Gubbi J, Buyya R, Marusic S, Palaniswami M (2013) Internet of Things (IoT): a vision, architectural elements, and future directions. Future Gener Comput Syst 29:1645–1660
Adamchuk VI, Hummel JW, Morgan MT, Upadhyaya SK (2004) On-the-go soil sensors for precision agriculture. Comput Electron Agric 44:71–91
Ghosal S, Blystone D, Singh AK, Ganapathysubramanian B, Singh A, Sarkar S (2018) An explainable deep machine vision framework for plant stress phenotyping. Proc Natl Acad Sci USA 115(18):4613–4618
Sharma A, Jain A, Gupta P, Chowdary V (2021) Machine learning applications for precision agriculture: a comprehensive review. IEEE Access 9:4843–4873
Jin XB, Yu XH, Wang XY, Bai YT, Su TL, Kong JL (2020) Deep learning predictor for sustainable precision agriculture based on Internet of things system. Sustainability 12(4):433.
Nex F, Remondino F (2014) UAV for 3D mapping applications: a review. Appl Geomat 6:1–15
Pierce FJ, Nowak P (1999) Aspects of precision agriculture. Advances in Agronomy, vol 67. Academic, New York, NY, USA, pp 1–85
Mason-D’Croz D, Bogard JR, Sulser TB, Cenacchi N, Dunston S, Herrero M, Wiebe K (2019) Gaps between fruit and vegetable production, demand, and recommended consumption at global and national levels: an integrated modelling study. Lancet Planet Health 3:e318–e329
Scialabba NE, Hoogeveen J, Turbe A, Tubiello FN (2013) Food wastage footprints: impact on natural resources. Summary Report (FAO) pp. 26
Li Y, Jin G, Jiang X, Yi S, Tian X (2020) Non-destructive determination of soluble solids content using a multi-region combination model in hybrid citrus. Infrared Phys Technol 104:103138. https://doi.org/10.1016/j.infrared.2019.103138
Barbin DF, Badaro AT, Honorato DCB, Ida EY, Shimokomaki M (2020) Identification of Turkey meat and processed products using near infrared spectroscopy. Food Control 107:106816. https://doi.org/10.1016/j.foodcont.2019.106816
Behkami S, Zain SM, Gholami M, Khir MFA (2019) Classification of cow milk using artificial neural network developed from the spectral data of single and three detector spectrophotometers. Food Chem 294:309–315. https://doi.org/10.1016/j.foodchem.2019.05.060
Sampaio PS, Soares A, Castanho A, Almeida AS, Oliveira J, Brites C (2018) Optimization of rice amylose determination by NIR-spectroscopy using PLS chemometrics algorithms. Food Chem 242:196–204. https://doi.org/10.1016/j.foodchem.2017.09.058
Zhang X, Yang J, Lin T, Ying Y (2021) Food and agro-product quality evaluation based on spectroscopy and deep learning: a review. Trends Food Sci Technol 112:431–441
LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521:436–444. https://doi.org/10.1038/nature14539
Zheng YY, Kong JL, Jin XB, Wang XY, Su TL, Zuo M (2019) CropDeep: the crop vision dataset for deep-learning-based classification and detection in precision agriculture. Sensors 19:1058
Xu J, Rahmatizadeh R, Bölöni L, Turgut D (2017) Real-time prediction of taxi demand using recurrent neural networks. IEEE Trans Intell Transp Syst 19:2572–2581
Sakar CO, Polat SO, Katircioglu M, Kastro Y (2019) Real-time prediction of online shoppers’ purchasing intention using multilayer perceptron and LSTM recurrent neural networks. Neural Comput Appl 31:6893–6908
Che Z, Purushotham S, Cho K, Sontag D, Liu Y (2018) Recurrent neural networks for multivariate time series with missing values. Sci Rep 8:6085
Salakhutdinov R, Hinton G (2009) Deep Boltzmann machines. In: Proceedings of AISTATS, pp 448–455
Vincent P, Larochelle H, Lajoie I, Bengio Y, Manzagol PA (2010) Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. J Mach Learn Res 11:3371–3408
Kamilaris A, Prenafeta-Boldú FX (2018) Deep learning in agriculture: a survey. Comput Electron Agric 147:70–90
Chen X, Xie L, He Y, Guan T, Zhou X, Wang B, Feng G, Yu H, Ji Y (2019) Fast and accurate decoding of Raman spectra-encoded suspension arrays using deep learning. Analyst 144:4312–4319. https://doi.org/10.1039/C9AN00913B
Ubbens J, Cieslak M, Prusinkiewicz P, Stavness I (2018) The use of plant models in deep learning: an application to leaf counting in rosette plants. Plant Methods 14:6. https://doi.org/10.1186/s13007-018-0273-z
Ubbens JR, Stavness I (2017) Deep plant phenomics: a deep learning platform for complex plant phenotyping tasks. Front Plant Sci 8:1190. https://doi.org/10.3389/fpls.2017.01190
Mohanty SP, Hughes DP, Salathé M (2016) Using deep learning for image-based plant disease detection. Front Plant Sci 7:1419. https://doi.org/10.3389/fpls.2016.01419
Pound MP, Burgess AJ, Wilson MH, Atkinson JA, Griffiths M, Jackson AS, Bulat A, Tzimiropoulos G, Wells DM, Murchie EH, Pridmore TP, French AP (2017) Deep machine learning provides state-of-the-art performance in image-based plant phenotyping. GigaScience 6:1–10. https://doi.org/10.1093/gigascience/gix083
Ren C, Kim DK, Jeong D (2020) A survey of deep learning in agriculture: techniques and their applications. J Inf Process Syst 16:1015–1033. https://doi.org/10.3745/JIPS.04.0187
Thomas G, Balocco S, Mann D, Simundsson A, Khorasani N (2021) Intelligent agricultural machinery using deep learning. IEEE Instrum Meas Mag 24:94–100
Su WH (2020) Advanced machine learning in point spectroscopy, RGB- and hyperspectral-imaging for automatic discriminations of crops and weeds: a review. Smart Cities 3(3):767–792. https://doi.org/10.3390/smartcities3030039
Soriano-Disla JM, Janik LJ, Viscarra Rossel RA, Macdonald LM, McLaughlin MJ (2014) The performance of visible, near-, and mid-infrared reflectance spectroscopy for prediction of soil physical, chemical, and biological properties. Appl Spectrosc Rev 49(2):139–186
Padarian J, Minasny B, McBratney AB (2019) Using deep learning to predict soil properties from regional spectral data. Geoderma Reg 16:e00198
Guo Y, Liu Y, Oerlemans A, Lao S, Wu S, Lew MS (2016) Deep learning for visual understanding: a review. Neurocomputing 187:27–48. https://doi.org/10.1016/j.neucom.2015.09.116
Lee SH, Chan CS, Mayo SJ, Remagnino P (2017) How deep learning extracts and learns leaf features for plant classification. Pattern Recognit 71:1–13. https://doi.org/10.1016/j.patcog.2017.05.015
Voulodimos A, Doulamis N, Doulamis A, Protopapadakis E (2018) Deep learning for computer vision: a brief review. Comput Intell Neurosci 2018:e7068349. https://doi.org/10.1155/2018/7068349
Zhou L, Zhang C, Liu F, Qiu Z, He Y (2019) Application of deep learning in food: a review. Compr Rev Food Sci Food Saf 18:1793–1811. https://doi.org/10.1111/1541-4337.12492
Abiodun OI, Jantan A, Omolara AE, Dada KV, Mohamed NA, Arshad H (2018) State-of-the-art in artificial neural network applications: a survey. Heliyon 4:e00938. https://doi.org/10.1016/j.heliyon.2018.e00938
Huang SC, Le TH (2021) Multi-category classification problem. In: Huang SC, Le TH (eds) Principles and Labs for Deep Learning. Academic Press, pp 81–116. https://doi.org/10.1016/B978-0-323-90198-7.00005-7
LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proc IEEE 86:2278–2324
Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: Pereira F, Burges CJC, Bottou L, Weinberger KQ (eds) Advances in neural information processing systems 25. Curran Associates Inc, New York, pp 1097–1105
Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556
Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 2818–2826
Szegedy C, Liu W, Jia Y et al (2014) Going deeper with convolutions. arXiv:1409.4842
He K, Zhang X, Ren S, Sun J (2015) Deep residual learning for image recognition. arXiv:1512.03385
Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H (2017) MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv:1704.04861
Chollet F (2017) Xception: deep learning with depthwise separable convolutions. arXiv:1610.02357
Huang G, Liu Z, van der Maaten L, Weinberger KQ (2018) Densely connected convolutional networks. arXiv:1608.06993
Redmon J, Farhadi A (2018) YOLOv3: an incremental improvement. arXiv:1804.02767
Li Z, Liu F, Yang W, Peng S, Zhou J (2021) A survey of convolutional neural networks: analysis, applications, and prospects. IEEE Trans Neural Netw Learn Syst 1–21. https://doi.org/10.1109/TNNLS.2021.3084827
De Mulder W, Bethard S, Moens M-F (2015) A survey on the application of recurrent neural networks to statistical language modeling. Comput Speech Lang 30:61–98. https://doi.org/10.1016/j.csl.2014.09.005
Zhang G, Liu Y, Jin X (2020) A survey of autoencoder-based recommender systems. Front Comput Sci 14:430–450. https://doi.org/10.1007/s11704-018-8052-6
Zhao C, Zhang Y, Du J, Guo X, Wen W, Gu S, Wang J, Fan J (2019) Crop phenomics: current status and perspectives. Front Plant Sci 10:714
Zhao ZQ, Zheng P, Xu ST, Wu X (2019) Object detection with deep learning: a review. IEEE Trans Neural Netw Learn Syst 30(11):3212–3232
Jiang Y, Li C (2020) Convolutional neural networks for image-based high-throughput plant phenotyping: a review. Plant Phenomics 2020
Azimi S, Kaur T, Gandhi TK (2021) A deep learning approach to measure stress level in plants due to nitrogen deficiency. Measurement 173:108650. https://doi.org/10.1016/j.measurement.2020.108650
Wang H, Seaborn T, Wang Z, Caudill CC, Link TE (2021) Modeling tree canopy height using machine learning over mixed vegetation landscapes. Int J Appl Earth Obs Geoinf 101:102353. https://doi.org/10.1016/j.jag.2021.102353
Šulc M, Matas J (2017) Fine-grained recognition of plants from images. Plant Methods 13(1):1–14. https://doi.org/10.1186/s13007-017-0265-4
Yasrab R, Atkinson JA, Wells DM, French AP, Pridmore TP, Pound MP (2019) RootNav 2.0: Deep learning for automatic navigation of complex plant root architectures. GigaScience 8(11):giz123. https://doi.org/10.1093/gigascience/giz123
O’Mahony N, Campbell S, Carvalho A, Harapanahalli S, Hernandez GV, Krpalkova L, Riordan D, Walsh J (2019) Deep learning vs. traditional computer vision. In: Science and information conference, pp 128–144. Springer, Cham. https://doi.org/10.1007/978-3-030-17795-9_10
Gao Z, Luo Z, Zhang W, Lv Z, Xu Y (2020) Deep learning application in plant stress imaging: a review. AgriEngineering 2(3):430–446. https://doi.org/10.3390/agriengineering2030029
Dias CA, Bueno JC, Borges EN, Botelho SS, Dimuro GP, Lucca G, Fernandéz J, Bustince H, Drews Junior PLJ (2018) Using the Choquet integral in the pooling layer in deep learning networks. In: North American fuzzy information processing society annual conference, pp 144–154. Springer, Cham. https://doi.org/10.1007/978-3-319-95312-0_13
Wu Q, Zhang K, Meng J (2019) Identification of soybean leaf diseases via deep learning. J Inst Eng (India) Series A 100(4):659–666. https://doi.org/10.1007/s40030-019-00390-y
Wu J, Yang G, Yang X, Xu B, Han L, Zhu Y (2019) Automatic counting of in situ rice seedlings from UAV images based on a deep fully convolutional neural network. Remote Sens 11(6):691
Jiang Y, Li C, Xu R, Sun S, Robertson JS, Paterson AH (2020) DeepFlower: a deep learning-based approach to characterize flowering patterns of cotton plants in the field. Plant Methods 16(1):1–17
Wu W, Liu T, Zhou P, Yang T, Li C, Zhong X, Sun C, Liu S, Guo W (2019) Image analysis-based recognition and quantification of grain number per panicle in rice. Plant Methods 15(1):1–14
Lin Z, Guo W (2021) Cotton stand counting from unmanned aerial system imagery using MobileNet and CenterNet deep learning models. Remote Sens 13(14):2822
Lu H, Cao Z (2020) TasselNetV2+: a fast implementation for high-throughput plant counting from high-resolution RGB imagery. Front Plant Sci 11:1929
Madec S, Jin X, Lu H, De Solan B, Liu S, Duyme F, Heritier E, Baret F (2019) Ear density estimation from high resolution RGB imagery using deep learning technique. Agric For Meteorol 264:225–234
Liu Y, Cen C, Che Y, Ke R, Ma Y, Ma Y (2020) Detection of maize tassels from UAV RGB imagery with faster R-CNN. Remote Sens 12(2):338
Jiang H, Hu H, Zhong R, Xu J, Xu J, Huang J, Wang S, Ying Y, Lin T (2020) A deep learning approach to conflating heterogeneous geospatial data for corn yield estimation: a case study of the US Corn Belt at the county level. Glob Change Biol 26(3):1754–1766
Khaki S, Wang L, Archontoulis SV (2020) A CNN-RNN framework for crop yield prediction. Front Plant Sci 10:1750
Shahhosseini M, Hu G, Huber I, Archontoulis SV (2021) Coupling machine learning and crop modeling improves crop yield prediction in the US Corn Belt. Sci Rep 11(1):1–15
Sobayo R, Wu HH, Ray R, Qian L (2018) Integration of convolutional neural network and thermal images into soil moisture estimation. In: 2018 1st international conference on data intelligence and security (ICDIS), pp 207–210. IEEE. https://doi.org/10.1109/ICDIS.2018.00041
Chandel NS, Chakraborty SK, Rajwade YA, Dubey K, Tiwari MK, Jat D (2021) Identifying crop water stress using deep learning models. Neural Comput Appl 33(10):5353–5367. https://doi.org/10.1007/s00521-020-05325-4
Fuentes A, Yoon S, Kim SC, Park DS (2017) A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition. Sensors 17(9):2022. https://doi.org/10.3390/s17092022
An J, Li W, Li M, Cui S, Yue H (2019) Identification and classification of maize drought stress using deep convolutional neural network. Symmetry 11(2):256. https://doi.org/10.3390/sym11020256
Aich S, Stavness I (2017) Leaf counting with deep convolutional and deconvolutional networks. In proceedings of the IEEE international conference on computer vision workshops, pp. 2080–2089
Kaneda Y, Shibata S, Mineno H (2017) Multi-modal sliding window-based support vector regression for predicting plant water stress. Knowl Based Syst 134:135–148. https://doi.org/10.1016/j.knosys.2017.07.028
Anami BS, Malvade NN, Palaiah S (2020) Deep learning approach for recognition and classification of yield affecting paddy crop stresses using field images. Artif Intell Agric 4:12–20. https://doi.org/10.1016/j.aiia.2020.03.001
Grinblat GL, Uzal LC, Larese MG, Granitto PM (2016) Deep learning for plant identification using vein morphological patterns. Comput Electron Agric 127:418–424. https://doi.org/10.1016/j.compag.2016.07.003
Lu H, Cao Z, Xiao Y, Zhuang B, Shen C (2017) TasselNet: counting maize tassels in the wild via local counts regression network. Plant Methods 13(1):1–17. https://doi.org/10.1186/s13007-017-0224-0
Zou H, Lu H, Li Y, Liu L, Cao Z (2020) Maize tassels detection: a benchmark of the state of the art. Plant Methods 16(1):1–15
Alzadjali A, Alali MH, Sivakumar ANV, Deogun JS, Scott S, Schnable JC, Shi Y (2021) Maize tassel detection from UAV imagery using deep learning. Front Robot AI 8:600410. https://doi.org/10.3389/frobt.2021.600410
Baweja HS, Parhar T, Mirbod O, Nuske S (2018) StalkNet: a deep learning pipeline for high-throughput measurement of plant stalk count and stalk width. In: Field and service robotics, pp 271–284. Springer, Cham
Malambo L, Popescu S, Ku NW, Rooney W, Zhou T, Moore S (2019) A deep learning semantic segmentation-based approach for field-level sorghum panicle counting. Remote Sens 11(24):2939
Teramoto S, Uga Y (2020) A deep learning-based phenotypic analysis of rice root distribution from field images. Plant Phenomics 2020
Brahimi M, Boukhalfa K, Moussaoui A (2017) Deep learning for tomato diseases: classification and symptoms visualization. Appl Artif Intell 31(4):299–315
Lu J, Hu J, Zhao G, Mei F, Zhang C (2017) An in-field automatic wheat disease diagnosis system. Comput Electron Agric 142:369–379. https://doi.org/10.1016/j.compag.2017.09.012
Zhang X, Qiao Y, Meng F, Fan C, Zhang M (2018) Identification of maize leaf diseases using improved deep convolutional neural networks. IEEE Access 6:30370–30377. https://doi.org/10.1109/ACCESS.2018.2844405
Lu Y, Yi S, Zeng N, Liu Y, Zhang Y (2017) Identification of rice diseases using deep convolutional neural networks. Neurocomputing 267:378–384. https://doi.org/10.1016/j.neucom.2017.06.023
Coulibaly S, Kamsu-Foguem B, Kamissoko D, Traore D (2019) Deep neural networks with transfer learning in millet crop images. Comput Ind 108:115–120. https://doi.org/10.1016/j.compind.2019.02.003
Militante SV, Gerardo BD, Medina RP (2019) Sugarcane disease recognition using deep learning. In: 2019 IEEE Eurasia conference on IoT, communication and engineering (ECICE), pp 575–578. IEEE. https://doi.org/10.1109/ECICE47484.2019.8942690
Avila-George H, Valdez-Morones T, Pérez-Espinosa H, Acevedo-Juárez B, Castro W (2018) Using artificial neural networks for detecting damage on tobacco leaves caused by blue mold. Int J Adv Comput Sci Appl 9(8):579–583
Jenifa A, Ramalakshmi R, Ramachandran V (2019) Cotton leaf disease classification using deep convolution neural network for sustainable cotton production. In 2019 IEEE international conference on clean energy and energy efficient electronics circuit for sustainable development (INCCES) pp. 1–3. IEEE. https://doi.org/10.1109/INCCES47820.2019.9167715
Hasan MZ, Ahamed MS, Rakshit A, Hasan KZ (2019) Recognition of Jute diseases by leaf image classification using convolutional neural network. In 2019 10th international conference on computing, communication and networking technologies (ICCCNT) pp. 1–5. IEEE. https://doi.org/10.1109/ICCCNT45670.2019.8944907
Esgario JG, Krohling RA, Ventura JA (2020) Deep learning for classification and severity estimation of coffee leaf biotic stress. Comput Electron Agric 169:105162
Barbedo JGA (2019) Plant disease identification from individual lesions and spots using deep learning. Biosyst Eng 180:96–107. https://doi.org/10.1016/j.biosystemseng.2019.02.002
Hu G, Yang X, Zhang Y, Wan M (2019) Identification of tea leaf diseases by using an improved deep convolutional neural network. Sustain Comput Inform Syst 24:100353. https://doi.org/10.1016/j.suscom.2019.100353
Rangarajan AK, Purushothaman R, Ramesh A (2018) Tomato crop disease classification using pre-trained deep learning algorithm. Procedia Comput Sci 133:1040–1047. https://doi.org/10.1016/j.procs.2018.07.070
Rangarajan AK, Raja P (2020) Automated disease classification in (Selected) agricultural crops using transfer learning. Automatika 61(2):260–272. https://doi.org/10.1080/00051144.2020.1728911
Liu B, Zhang Y, He D, Li Y (2018) Identification of apple leaf diseases based on deep convolutional neural networks. Symmetry 10(1):11. https://doi.org/10.3390/sym10010011
Ji M, Zhang L, Wu Q (2020) Automatic grape leaf diseases identification via United Model based on multiple convolutional neural networks. Inf Process Agric 7(3):418–426. https://doi.org/10.1016/j.inpa.2019.10.003
Ferentinos KP (2018) Deep learning models for plant disease detection and diagnosis. Comput Electron Agric 145:311–318
Too EC, Yujian L, Njuki S, Yingchun L (2019) A comparative study of fine-tuning deep learning models for plant disease identification. Comput Electron Agric 161:272–279. https://doi.org/10.1016/j.compag.2018.03.032
Latif G, Alghazo J, Maheswar R, Vijayakumar V, Butt M (2020) Deep learning based intelligence cognitive vision drone for automatic plant diseases identification and spraying. J Intell Fuzzy Syst 39(6):8103–8114. https://doi.org/10.3233/JIFS-189132
Amara J, Bouaziz B, Algergawy A (2017) A deep learning-based approach for banana leaf diseases classification. In: Datenbanksysteme für Business, Technologie und Web (BTW 2017), Workshopband
Agrios GN (2005) Plant diseases caused by viruses. In: Plant pathology, 5th edn. Academic Press, San Diego, CA, USA, pp 723–824
Türkoğlu M, Hanbay D (2019) Plant disease and pest detection using deep learning-based features. Turk J Electr Eng Comput Sci 27(3):1636–1651. https://doi.org/10.3906/elk-1809-181
Wu L, Liu Z, Bera T, Ding H, Langley DA, Jenkins-Barnes A, Furlanello C, Maggio V, Tong W, Xu J (2019) A deep learning model to recognize food contaminating beetle species based on elytra fragments. Comput Electron Agric 166:105002
Kang H, Chen C (2020) Fast implementation of real-time fruit detection in apple orchards using deep learning. Comput Electron Agric 168:105108. https://doi.org/10.1016/j.compag.2019.105108
Onishi Y, Yoshida T, Kurita H, Fukao T, Arihara H, Iwai A (2019) An automated fruit harvesting robot by using deep learning. Robomech J 6(1):1–8. https://doi.org/10.1186/s40648-019-0141-2
Sa I, Ge Z, Dayoub F, Upcroft B, Perez T, McCool C (2016) Deepfruits: a fruit detection system using deep neural networks. Sensors 16(8):1222. https://doi.org/10.3390/s16081222
Yu Y, Zhang K, Yang L, Zhang D (2019) Fruit detection for strawberry harvesting robot in non-structural environment based on Mask-RCNN. Comput Electron Agric 163:104846. https://doi.org/10.1016/j.compag.2019.06.001
Afonso M, Fonteijn H, Fiorentin FS, Lensink D, Mooij M, Faber N, Polder G, Wehrens R (2020) Tomato fruit detection and counting in greenhouses using deep learning. Front Plant Sci 11:1759. https://doi.org/10.3389/fpls.2020.571299
Yamamoto K, Guo W, Yoshioka Y, Ninomiya S (2014) On plant detection of intact tomato fruits using image analysis and machine learning methods. Sensors 14(7):12191–12206. https://doi.org/10.3390/s140712191
Rahnemoonfar M, Sheppard C (2017) Deep count: fruit counting based on deep simulated learning. Sensors 17(4):905
Blok PM, van Henten EJ, van Evert FK, Kootstra G (2021) Image-based size estimation of broccoli heads under varying degrees of occlusion. Biosyst Eng 208:213–233. https://doi.org/10.1016/j.biosystemseng.2021.06.001
Muresan H, Oltean M (2018) Fruit recognition from images using deep learning. Acta Univ Sapientiae Informatica 10:26–42
Bakhshipour A, Jafari A (2018) Evaluation of support vector machine and artificial neural networks in weed detection using shape features. Comput Electron Agric 145:153–160. https://doi.org/10.1016/j.compag.2017.12.032
Wang A, Zhang W, Wei X (2019) A review on weed detection using ground-based machine vision and image processing techniques. Comput Electron Agric 158:226–240. https://doi.org/10.1016/j.compag.2019.02.005
Dankhara F, Patel K, Doshi N (2019) Analysis of robust weed detection techniques based on the Internet of Things (IoT). Procedia Comput Sci 160:696–701. https://doi.org/10.1016/j.procs.2019.11.025
Vrindts E, De Baerdemaeker J, Ramon H (2002) Weed detection using canopy reflection. Precis Agric 3(1):63–80. https://doi.org/10.1023/A:1013326304427
Hasan AM, Sohel F, Diepeveen D, Laga H, Jones MG (2021) A survey of deep learning techniques for weed detection from images. Comput Electron Agric 184:106067
Bah MD, Hafiane A, Canals R (2018) Deep learning with unsupervised data labeling for weed detection in line crops in UAV images. Remote Sens 10(11):1690. https://doi.org/10.3390/rs10111690
Ferreira AS, Freitas DM, da Silva GG, Pistori H, Folhes MT (2017) Weed detection in soybean crops using ConvNets. Comput Electron Agric 143:314–324. https://doi.org/10.1016/j.compag.2017.10.027
Yu J, Sharpe SM, Schumann AW, Boyd NS (2019) Deep learning for image-based weed detection in turfgrass. Eur J Agron 104:78–84. https://doi.org/10.1016/j.eja.2019.01.004
Olsen A, Konovalov DA, Philippa B, Ridd P, Wood JC, Johns J, Banks W, Girgenti B, Kenny O, Whinney J, Azghadiand MR, Calvert B (2019) DeepWeeds: a multiclass weed species image dataset for deep learning. Sci Rep 9(1):1–12. https://doi.org/10.1038/s41598-018-38343-3
Osorio K, Puerto A, Pedraza C, Jamaica D, Rodríguez L (2020) A deep learning approach for weed detection in lettuce crops using multispectral images. AgriEngineering 2(3):471–488. https://doi.org/10.3390/agriengineering2030032
Sivakumar ANV, Li J, Scott S, Psota E, Jhala AJ, Luck JD, Shi Y (2020) Comparison of object detection and patch-based classification deep learning models on mid- to late-season weed detection in UAV imagery. Remote Sens 12(13):2136. https://doi.org/10.3390/rs12132136
Yu J, Schumann AW, Cao Z, Sharpe SM, Boyd NS (2019) Weed detection in perennial ryegrass with deep learning convolutional neural network. Front Plant Sci 10:1422. https://doi.org/10.3389/fpls.2019.01422
Schulz S, Becker R, Cerda JC, Usman M, aus der Beek T, Merz R, Schüth C (2021) Estimating water balance components in irrigated agriculture using a combined approach of soil moisture and energy balance monitoring, and numerical modelling. Hydrol Process 35(3):e14077. https://doi.org/10.1002/hyp.14077
Chen Z, Zhu Z, Jiang H, Sun S (2020) Estimating daily reference evapotranspiration based on limited meteorological data using deep learning and classical machine learning methods. J Hydrol 591:125286. https://doi.org/10.1016/j.jhydrol.2020.125286
Djaman K, Balde AB, Sow A, Muller B, Irmak S, N’Diaye MK, Manneh B, Moukoumbi YD, Futakuchi K, Saito K (2015) Evaluation of sixteen reference evapotranspiration methods under sahelian conditions in the Senegal River Valley. J Hydrol Reg Stud 3:139–159. https://doi.org/10.1016/j.ejrh.2015.02.002
Karandish F, Šimůnek J (2016) A comparison of numerical and machine-learning modeling of soil water content with limited input data. J Hydrol 543:892–909. https://doi.org/10.1016/j.jhydrol.2016.11.007
Gupta SC, Larson WE (1979) Estimating soil water retention characteristics from particle size distribution, organic matter percent, and bulk density. Water Resour Res 15(6):1633–1635. https://doi.org/10.1029/WR015i006p01633
Rawls WJ, Brakensiek DL, Saxtonn KE (1982) Estimation of soil water properties. Trans ASAE 25(5):1316–1320. https://doi.org/10.13031/2013.33720
Saxton KE, Rawls WJ (2006) Soil water characteristic estimates by texture and organic matter for hydrologic solutions. Soil Sci Soc Am J 70:1569–1578. https://doi.org/10.2136/sssaj2005.0117
Li Q, Wang Z, Shangguan W, Li L, Yao Y, Yu F (2021) Improved daily SMAP satellite soil moisture prediction over China using deep learning model with transfer learning. J Hydrol 600:126698. https://doi.org/10.1016/j.jhydrol.2021.126698
Quinlan JR (1992) Learning with continuous classes. In: Australian joint conference on artificial intelligence. pp. 343–348
Tabari H, Kisi O, Ezani A, Hosseinzadeh Talaee P (2012) SVM, ANFIS, regression and climate based models for reference evapotranspiration modeling using limited climatic data in a semi-arid highland environment. J Hydrol 444–445:78–89. https://doi.org/10.1016/j.jhydrol.2012.04.007
Wen X, Si J, He Z, Wu J, Shao H, Yu H (2015) Support-Vector-Machine-based models for modeling daily reference evapotranspiration with limited climatic data in extreme arid regions. Water Resour Manag 29:3195–3209. https://doi.org/10.1007/s11269-015-0990-2
Yaseen ZM, Jaafar O, Deo RC, Kisi O, Adamowski J, Quilty J, El-Shafie A (2016) Stream-flow forecasting using extreme learning machines: A case study in a semi-arid region in Iraq. J Hydrol 542:603–614. https://doi.org/10.1016/j.jhydrol.2016.09.035
Pereira LS, Allen RG, Smith M, Raes D (2015) Crop evapotranspiration estimation with FAO56: past and future. Agric Water Manag 147:4–20. https://doi.org/10.1016/j.agwat.2014.07.031
Ferreira LB, Da Cunha FF (2020) New approach to estimate daily reference evapotranspiration based on hourly temperature and relative humidity using machine learning and deep learning. Agric Water Manag 234:106113. https://doi.org/10.1016/j.agwat.2020.106113
Vulova S, Meier F, Rocha AD, Quanz J, Nouri H, Kleinschmit B (2021) Modeling urban evapotranspiration using remote sensing, flux footprints, and artificial intelligence. Sci Total Environ 786:147293. https://doi.org/10.1016/j.scitotenv.2021.147293
Saggi MK, Jain S (2019) Reference evapotranspiration estimation and modeling of the Punjab Northern India using deep learning. Comput Electron Agric 156:387–398. https://doi.org/10.1016/j.compag.2018.11.031
Roy DK (2021) Long short-term memory networks to predict one-step ahead reference evapotranspiration in a subtropical climatic zone. Environ Process 8:911–941. https://doi.org/10.1007/s40710-021-00512-4
Alibabaei K, Gaspar PD, Lima TM (2021) Modeling soil water content and reference evapotranspiration from climate data using deep learning method. Appl Sci 11(11):5029. https://doi.org/10.3390/app11115029
Yin J, Deng Z, Ines AVM, Wu J, Rasu E (2020) Forecast of short-term daily reference evapotranspiration under limited meteorological variables using a hybrid bi-directional long short-term memory model (Bi-LSTM). Agric Water Manag 242:106386. https://doi.org/10.1016/j.agwat.2020.106386
Afzaal H, Farooque AA, Abbas F, Acharya B, Esau T (2020) Computation of evapotranspiration with artificial intelligence for precision water resource management. Appl Sci 10(5):1621. https://doi.org/10.3390/app10051621
Proias GE, Gravalos IO, Papageorgiou EL, Poczęta KA, Sakellariou-Makrantonaki MA (2020) Forecasting reference evapotranspiration using time lagged recurrent neural network. WSEAS Trans Environ Dev 16:699–707. https://doi.org/10.37394/232015.2020.16.72
Chen Z, Sun S, Wang Y, Wang Q, Zhang X (2020) Temporal convolution-network-based models for modeling maize evapotranspiration under mulched drip irrigation. Comput Electron Agric 169:105206. https://doi.org/10.1016/j.compag.2019.105206
Granata F, Di Nunno F (2021) Forecasting evapotranspiration in different climates using ensembles of recurrent neural networks. Agric Water Manag 255:107040. https://doi.org/10.1016/j.agwat.2021.107040
Sattari MT, Apaydin H, Band SS, Mosavi A, Prasad R (2021) Comparative analysis of kernel-based versus ANN and deep learning methods in monthly reference evapotranspiration estimation. Hydrol Earth Syst Sci 25:603–618. https://doi.org/10.5194/hess-25-603-2021
Entekhabi D, Rodriguez-Iturbe I, Castelli F (1996) Mutual interaction of soil moisture state and atmospheric processes. J Hydrol 184:3–17. https://doi.org/10.1016/0022-1694(95)02965-6
Botula YD, Cornelis WM, Baert G, Van Ranst E (2012) Evaluation of pedotransfer functions for predicting water retention of soils in Lower Congo (D.R. Congo). Agric Water Manag 111:1–10. https://doi.org/10.1016/j.agwat.2012.04.006
Mohanty M, Sinha NK, Painuli DK, Bandyopadhyay KK, Hati KM, Sammi Reddy K, Chaudhary RS (2015) Modelling soil water contents at field capacity and permanent wilting point using artificial neural network for Indian soils. Natl Acad Sci Lett 38:373–377. https://doi.org/10.1007/s40009-015-0358-4
Drusch M (2007) Initializing numerical weather prediction models with satellite-derived surface soil moisture: data assimilation experiments with ECMWF’s integrated forecast system and the TMI soil moisture data set. J Geophys Res Atmos 112(D3):D03102. https://doi.org/10.1029/2006JD007478
Koster RD, Dirmeyer PA, Guo Z, Bonan G, Chan E, Cox P, Gordon CT, Kanae S, Kowalczyk E, Lawrence D, Liu P, Lu CH, Malyshev S, McAvaney B, Mitchell K, Mocko D, Oki T, Oleson K, Pitman A, Sud YC, Taylor CM, Verseghy D, Vasic R, Xue Y, Yamada T (2004) Regions of strong coupling between soil moisture and precipitation. Science 305(5687):1138–1140. https://doi.org/10.1126/science.1100217
Seneviratne SI, Corti T, Davin EL, Hirschi M, Jaeger EB, Lehner I, Orlowsky B, Teuling AJ (2010) Investigating soil moisture–climate interactions in a changing climate: a review. Earth-Science Rev 99(3–4):125–161. https://doi.org/10.1016/j.earscirev.2010.02.004
Cai W, Cowan T, Briggs P, Raupach M (2009) Rising temperature depletes soil moisture and exacerbates severe drought conditions across southeast Australia. Geophys Res Lett 36:L21709. https://doi.org/10.1029/2009GL040334
Norbiato D, Borga M, Degli Esposti S, Gaume E, Anquetin S (2008) Flash flood warning based on rainfall thresholds and soil moisture conditions: an assessment for gauged and ungauged basins. J Hydrol 362(3–4):274–290. https://doi.org/10.1016/j.jhydrol.2008.08.023
Wanders N, Karssenberg D, De Roo A, De Jong SM, Bierkens MFP (2014) The suitability of remotely sensed soil moisture for improving operational flood forecasting. Hydrol Earth Syst Sci 18:2343–2357. https://doi.org/10.5194/hess-18-2343-2014
Pastor J, Post WM (1986) Influence of climate, soil moisture, and succession on forest carbon and nitrogen cycles. Biogeochem 2:3–27. https://doi.org/10.1007/BF02186962
Bolten JD, Crow WT, Jackson TJ, Zhan X, Reynolds CA (2010) Evaluating the utility of remotely sensed soil moisture retrievals for operational agricultural drought monitoring. IEEE J Sel Top Appl Earth Obs Remote Sens 3(1):57–66. https://doi.org/10.1109/JSTARS.2009.2037163
Wei J, Dirmeyer PA, Wisser D, Bosilovich MG, Mocko DM (2013) Where does the irrigation water go? an estimate of the contribution of irrigation to precipitation using MERRA. J Hydrometeorol 14(1):275–289. https://doi.org/10.1175/JHM-D-12-079.1
Reichstein M, Camps-Valls G, Stevens B, Jung M, Denzler J, Carvalhais N (2019) Deep learning and process understanding for data-driven earth system science. Nature 566(7743):195–204. https://doi.org/10.1038/s41586-019-0912-1
Song X, Zhang G, Liu F, Li D, Zhao Y, Yang J (2016) Modeling spatio-temporal distribution of soil moisture by deep learning-based cellular automata model. J Arid Land 8:734–748. https://doi.org/10.1007/s40333-016-0049-0
Tseng D, Wang D, Chen C, Miller L, Song W, Viers J, Vougioukas S, Carpin S, Ojea JA, Goldberg K (2018) Towards automating precision irrigation: deep learning to infer local soil moisture conditions from synthetic aerial agricultural images. In: IEEE international conference on automation science and engineering. IEEE Computer Society, pp 284–291. https://doi.org/10.1109/COASE.2018.8560431
Yamaç SS, Şeker C, Negiş H (2020) Evaluation of machine learning methods to predict soil moisture constants with different combinations of soil input data for calcareous soils in a semi arid area. Agric Water Manag 234:106121. https://doi.org/10.1016/j.agwat.2020.106121
Yu J, Tang S, Zhangzhong L, Zheng W, Wang L, Wong A, Xu L (2020) A deep learning approach for multi-depth soil water content prediction in summer maize growth period. IEEE Access 8:199097–199110. https://doi.org/10.1109/ACCESS.2020.3034984
Yu J, Zhang X, Xu L, Dong J, Zhangzhong L (2021) A hybrid CNN-GRU model for predicting soil moisture in maize root zone. Agric Water Manag 245:106649. https://doi.org/10.1016/j.agwat.2020.106649
Njoku EG, Jackson TJ, Lakshmi V, Member S, Chan TK, Nghiem SV (2003) Soil moisture retrieval from AMSR-E. IEEE Trans Geosci Remote Sens 41(2):215–229. https://doi.org/10.1109/TGRS.2002.808243
Wagner W, Lemoine G, Rott H (1999) A method for estimating soil moisture from ERS scatterometer and soil data. Remote Sens Environ 70(2):191–207. https://doi.org/10.1016/S0034-4257(99)00036-X
Kerr YH, Waldteufel P, Wigneron JP, Delwart S, Cabot F, Boutin J, Escorihuela MJ, Font J, Reul N, Gruhier C, Juglea SE, Drinkwater MR, Hahne A, Martin-Neira M, Mecklenburg S (2010) The SMOS mission: new tool for monitoring key elements of the global water cycle. Proc IEEE 98(5):666–687. https://doi.org/10.1109/JPROC.2010.2043032
Entekhabi D, Njoku EG, O’Neill PE, Kellogg KH, Crow WT, Edelstein WN, Entin JK, Goodman SD, Jackson TJ, Johnson J, Kimball J, Piepmeier JR, Koster RD, Martin N, McDonald KC, Moghaddam M, Moran S, Reichle R, Shi JC, Spencer MW, Thurman SW, Tsang L, Van Zyl J (2010) The soil moisture active passive (SMAP) mission. Proc IEEE 98(5):704–716. https://doi.org/10.1109/JPROC.2010.2043918
Liang S, Wang J (2019) Advanced remote sensing: terrestrial information extraction and applications, 2nd edn. Academic Press
Liou YA, Liu SF, Wang WJ (2001) Retrieving soil moisture from simulated brightness temperatures by a neural network. IEEE Trans Geosci Remote Sens 39(8):1662–1672. https://doi.org/10.1109/36.942544
Fang K, Shen C, Kifer D, Yang X (2017) Prolongation of SMAP to spatiotemporally seamless coverage of continental U.S. using a deep learning neural network. Geophys Res Lett 44:11030–11039. https://doi.org/10.1002/2017GL075619
Zhang D, Zhang W, Huang W, Hong Z, Meng L (2017) Upscaling of surface soil moisture using a deep learning model with VIIRS RDR. ISPRS Int J Geo-Inf 6(5):130. https://doi.org/10.3390/ijgi6050130
Lee CS, Sohn E, Park JD, Jang JD (2019) Estimation of soil moisture using deep learning based on satellite data: a case study of South Korea. GISci Remote Sens 56(1):43–67. https://doi.org/10.1080/15481603.2018.1489943
Wang W, Zhang C, Li F, Song J, Li P, Zhang Y (2020) Extracting soil moisture from fengyun-3D medium resolution spectral imager-II imagery by using a deep belief network. J Meteorol Res 344(34):748–759. https://doi.org/10.1007/s13351-020-9191-x
Masrur Ahmed AA, Deo RC, Raj N, Ghahramani A, Feng Q, Yin Z, Yang L (2021) Deep learning forecasts of soil moisture: convolutional neural network and gated recurrent unit models coupled with satellite-derived MODIS, observations and synoptic-scale climate index data. Remote Sens 13(4):554. https://doi.org/10.3390/rs13040554
Porat R, Lichter A, Terry LA, Harker R, Buzby J (2018) Postharvest losses of fruit and vegetables during retail and in consumers’ homes: quantifications, causes, and means of prevention. Postharvest Biol Technol 139:135–149
Chakraborty SK, Mahanti NK, Mansoori SM, Tripathi MK, Kotwaliwale N, Jayas DS (2021) Non-destructive classification and prediction of aflatoxin-B1 concentration in maize kernels using Vis-NIR (400–1000 nm) hyperspectral imaging. J Food Sci Technol 58:437–450
Bureau S, Cozzolino D, Clark CJ (2019) Contributions of Fourier-transform mid infrared (FT-MIR) spectroscopy to the study of fruit and vegetables: a review. Postharvest Biol Technol 148:1–14
Li L, Peng Y, Li Y, Chao K, Dhakal S (2020) Online detection of tomato internal and external quality attributes by an optical sensing system. In: Sensing for agriculture and food quality and safety XII, vol 11421, p 114210T. International Society for Optics and Photonics
Ni C, Wang D, Tao Y (2019) Variable weighted convolutional neural network for the nitrogen content quantization of Masson pine seedling leaves with near-infrared spectroscopy. Spectrochim Acta A Mol Biomol Spectrosc 209:32–39
Rong D, Wang H, Ying Y, Zhang Z, Zhang Y (2020) Peach variety detection using VIS-NIR spectroscopy and deep learning. Comput Electron Agric 175:105553
Acquarelli J, Van Laarhoven T, Gerretzen J, Tran TN, Buydens LMC, Marchiori E (2017) Convolutional neural networks for vibrational spectroscopic data analysis. Anal Chim Acta 954:22–31
Liu Y, Zhou S, Han W, Liu W, Qiu Z, Li C (2019) Convolutional neural network for hyperspectral data analysis and effective wavelengths selection. Anal Chim Acta 1086:46–54
Bai Y, Xiong Y, Huang J, Zhou J, Zhang B (2019) Accurate prediction of soluble solid content of apples from multiple geographical regions by combining deep learning with spectral fingerprint features. Postharvest Biol Technol 156:110943
Qiu Z, Chen J, Zhao Y, Zhu S, He Y, Zhang C (2018) Variety identification of single rice seed using hyperspectral imaging combined with convolutional neural network. Appl Sci 8(2):212
Nie P, Zhang J, Feng X, Yu C, He Y (2019) Classification of hybrid seeds using near-infrared hyperspectral imaging technology combined with deep learning. Sens Actuators B Chem 296:126630
Wu N, Zhang Y, Na R, Mi C, Zhu S, He Y (2019) Variety identification of oat seeds using hyperspectral imaging: Investigating the representation ability of deep convolutional neural network. RSC Adv 9(22):12635–12644
Wu N, Zhang C, Bai X, Du X, He Y (2018) Discrimination of Chrysanthemum varieties using hyperspectral imaging combined with a deep convolutional neural network. Molecules 23:2831
Zhang X, Xu J, Lin T, Ying Y (2018) Convolutional neural network based classification analysis for near infrared spectroscopic sensing. In: 2018 ASABE international meeting. ASABE, pp 1–6
Franczyk B, Hernes M, Kozierkiewicz A, Kozina A, Pietranik M, Roemer I, Schieck M (2020) Deep learning for grape variety recognition. Proc Comput Sci 176:1211–1220
Yu X, Lu H, Wu D (2018) Development of deep learning method for predicting firmness and soluble solid content of postharvest Korla fragrant pear using Vis/NIR hyperspectral reflectance imaging. Postharvest Biol Technol 141:39–49
Malek S, Melgani F, Bazi Y (2018) One-dimensional convolutional neural networks for spectroscopic signal regression. J Chemom 32(5):e2977
Zhang C, Wu W, Zhou L, Cheng H, Ye X, He Y (2020) Developing deep learning based regression approaches for determination of chemical compositions in dry black goji berries (Lycium ruthenicum Murr.) using near-infrared hyperspectral imaging. Food Chem 319:126536
Feng L, Zhu S, Zhou L, Zhao Y, Bao Y, Zhang C (2019) Detection of subtle bruises on winter jujube using hyperspectral imaging with pixel-wise deep learning method. IEEE Access 7:64494–64505
Gao Z, Shao Y, Xuan G, Wang Y, Liu Y, Han X (2020) Real-time hyperspectral imaging for the in-field estimation of strawberry ripeness with deep learning. Artif Intell Agric 4:31–38
Arivazhagan S, Shebiah N, Nidhyanandhan S, Ganesan L (2010) Fruit recognition using color and texture features. J Emerg Trends Comput Inf Sci 1:90–94
Zawbaa HM, Abbass M, Hazman M, Hassenian AE (2014) Automatic fruit image recognition system based on shape and color features. In: Hassanien AE, Tolba MF, Azar AT (eds) Advanced machine learning technologies and applications (AMLTA 2014). Commun Comput Inf Sci 488:278–290. https://doi.org/10.1007/978-3-319-13461-1_27
Li D, Zhao H, Zhao X, Gao Q, Xu L (2017) Cucumber detection based on texture and color in greenhouse. Int J Pattern Recognit Artif Intell 31:1754016
Ninawe P, Pandey S (2014) A completion on fruit recognition system using k-nearest neighbours algorithm. Int J Adv Res 3:2352–2356
Liu Z, He Y, Cen H, Lu R (2018) Deep feature representation with stacked sparse auto-encoder and convolutional neural network for hyperspectral imaging-based detection of cucumber defects. Trans ASABE 61:425–436
Raikar MM, Meena SM, Kuchanur C, Girraddi S, Benagi P (2020) Classification and grading of okra-ladies finger using deep learning. Proc Comput Sci 171:2380–2389
Polder G, Blok PM, de Villiers HA, van der Wolf JM, Kamp J (2019) Potato virus y detection in seed potatoes using deep learning on hyperspectral images. Front Plant Sci 10:209
Alajrami MA, Abu-Naser SS (2020) Type of tomato classification using deep learning. Int J Acad Pedagog Res 3:21–25
Da Costa AZ, Figueroa HE, Fracarolli JA (2020) Computer vision based detection of external defects on tomatoes using deep learning. Biosyst Eng 190:131–144
Zhang L, Jia J, Li Y, Gao W, Wang M (2019) Deep learning based rapid diagnosis system for identifying tomato nutrition disorders. KSII Trans Internet Inf Syst 13(4)
Mubin NA, Nadarajoo E, Shafri HZM, Hamedianfar A (2019) Young and mature oil palm tree detection and counting using convolutional neural network deep learning method. Int J Remote Sens 40:7500–7515. https://doi.org/10.1080/01431161.2019.1569282
Nasiri A, Taheri-Garavand A, Zhang YD (2019) Image-based deep learning automated sorting of date fruit. Postharvest Biol Technol 153:133–141
Altaheri H, Alsulaiman M, Muhammad G (2019) Date fruit classification for robotic harvesting in a natural environment using deep learning. IEEE Access 7:117115–117133
Bisgin H, Bera T, Ding HJ, Semey HG, Wu LH, Xu J (2018) Comparing SVM and ANN based machine learning methods for species identification of food contaminating beetles. Sci Rep 8:12. https://doi.org/10.1038/s41598-018-24926-7
Ravikanth L, Jayas DS, White NDG, Fields PG, Sun DW (2017) Extraction of spectral information from hyperspectral data and application of hyperspectral imaging for food and agricultural products. Food Bioproc Tech 10(1):1–33
Ropodi AI, Panagou EZ, Nychas GJE (2016) Data mining derived from food analyses using non-invasive/non-destructive analytical techniques; determination of food authenticity, quality & safety in tandem with computer science disciplines. Trends Food Sci Technol 50:11–25. https://doi.org/10.1016/j.tifs.2016.01.011
Song Q, Zheng YJ, Xue Y, Sheng WG, Zhao MR (2017) An evolutionary deep neural network for predicting morbidity of gastrointestinal infections by food contamination. Neurocomputing 226:16–22
Rong D, Wang H, Xie L, Ying Y, Zhang Y (2020) Impurity detection of juglans using deep learning and machine vision. Comput Electron Agric 178:105764
Jiang B, He J, Yang S, Fu H, Li T, Song H, He D (2019) Fusion of machine vision technology and AlexNet-CNNs deep learning network for the detection of postharvest apple pesticide residues. Artif Intell Agric 1:1–8
Neto HA, Tavares WLF, Ribeiro DCSZ, Alves RCO, Fonseca LM, Campos SVA (2019) On the utilization of deep and ensemble learning to detect milk adulteration. BioData Min 12(1):13
Zheng M, Zhang Y, Gu J, Bai Z, Zhu R (2021) Classification and quantification of minced mutton adulteration with pork using thermal imaging and convolutional neural network. Food Control 126:108044. https://doi.org/10.1016/j.foodcont.2021.108044
Wei Z, Yang Y, Wang J, Zhang W, Ren Q (2018) The measurement principles, working parameters and configurations of voltammetric electronic tongues and its applications for foodstuff analysis. J Food Eng 217:75–92
Kiranmayee AH, Panchariya PC, Sharma AL (2012) New data reduction algorithm for voltammetric signals of electronic tongue for discrimination of liquids. Sens Actuators A 187:154–161
Taheri-Garavand A, Fatahi S, Omid M, Makino Y (2019) Meat quality evaluation based on computer vision technique: a review. Meat Sci 156:183–195
Yang Z, Gao J, Wang S, Wang Z, Li C, Lan Y, Sun X, Li S (2021) Synergetic application of E-tongue and E-eye based on deep learning to discrimination of Pu-erh tea storage time. Comput Electron Agric 187:106297
Shi Q, Guo T, Yin T, Wang Z, Li C, Sun X, Guo Y, Yuan W (2018) Classification of Pericarpium Citri Reticulatae of different ages by using a voltammetric electronic tongue system. Int J Electrochem Sci 13:11359–11374. https://doi.org/10.20964/2018.12.45
Hughes D, Salathé M (2015) An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv preprint arXiv:1511.08060
Liu L, Wang R, Xie C, Yang P, Wang F, Sudirman S, Liu W (2019) PestNet: an end-to-end deep learning approach for large-scale multi-class pest detection and classification. IEEE Access 7:45301–45312. https://doi.org/10.1109/access.2019.2909522
Zhang J (2010) Multi-source remote sensing data fusion: status and trends. Int J Image Data Fusion 1(1):5–24
Li H, Lin Z, Shen X, Brandt J, Hua G (2015) A convolutional neural network cascade for face detection. In: 2015 IEEE conference on computer vision and pattern recognition (CVPR). IEEE, pp 5325–5334
Pan SJ, Yang Q (2010) A survey on transfer learning. IEEE Trans Knowl Data Eng 22(10):1345–1359
Zhang W, Goh ATC, Zhang R, Li Y, Wang N (2020) Back-propagation neural network modeling on the load–settlement response of single piles. In: Samui P, Tien Bui D, Chakraborty S, Deo RC (eds) Handbook of probabilistic models. Butterworth-Heinemann, pp 467–487
Sharma DK, Chatterjee M, Kaur G, Vavilala S (2022) Deep learning applications for disease diagnosis. In: Gupta D, Kose U, Khanna A, Balas VE (eds) Deep learning for medical applications with unique data. Academic Press, pp 31–51
Maheswari P, Raja P, Apolo-Apolo OE, Pérez-Ruiz M (2021) Intelligent fruit yield estimation for orchards using deep learning based semantic segmentation techniques-a review. Front Plant Sci 12:1247. https://doi.org/10.3389/fpls.2021.684328
Cho J, Lee K, Shin E, Choy G, Do S (2015) Medical image deep learning with hospital PACS dataset. arXiv preprint arXiv:1511.06348
Femin A, Biju KS (2020) Accurate detection of buildings from satellite images using CNN. In: 2020 international conference on electrical, communication, and computer engineering (ICECCE), pp 1–5
Bishop JM (2021) Artificial intelligence is stupid and causal reasoning will not fix it. Front Psychol 11:2603. https://doi.org/10.3389/fpsyg.2020.513474
Hair JF, Sarstedt M (2021) Data, measurement, and causal inferences in machine learning: opportunities and challenges for marketing. J Mark Theory Pract 29:65–77. https://doi.org/10.1080/10696679.2020.1860683
Kuang K, Li L, Geng Z, Xu L, Zhang K (2020) Causal inference. Engineering 6:253–263. https://doi.org/10.1016/j.eng.2019.08.016
Apolo-Apolo OE, Martínez-Guanter J, Egea G, Raja P, Pérez-Ruiz M (2020) Deep learning techniques for estimation of the yield and size of citrus fruits using a UAV. Eur J Agron 115:126030. https://doi.org/10.1016/j.eja.2020.126030
Saranya N, Srinivasan K, Pravin Kumar SK, Rukkumani V, Ramya R (2020) Fruit classification using traditional machine learning and deep learning approach. In: Smys S, Tavares JMRS, Balas VE, Iliyasu AM (eds) Computational vision and bio-inspired computing. Springer International Publishing, Cham, pp 79–89
Mohameth F, Bingcai C, Sada KA (2020) Plant disease detection with deep learning and feature extraction using plant village. J Comput Commun 8:10–22
Ghahramani Z (2015) Probabilistic machine learning and artificial intelligence. Nature 521:452–459
Zhang R, Jing X, Wu S, Jiang C, Mu J, Yu FR (2021) Device-free wireless sensing for human detection: the deep learning perspective. IEEE Internet Things J 8:2517–2539. https://doi.org/10.1109/JIOT.2020.3024234
Curilem M, Canário J, Franco L, Rios R (2018) Using CNN to classify spectrograms of seismic events from Llaima volcano (Chile). In: Proceedings of the international joint conference on neural networks, pp 1–8. https://doi.org/10.1109/IJCNN.2018.8489285
Zhou Y, Yue H, Zhou S, Kong Q (2019) Hybrid event detection and phase-picking algorithm using convolutional and recurrent neural networks. Seismol Res Lett 90:1079–1087. https://doi.org/10.1785/0220180319
Ma Z, Mei G (2021) Deep learning for geological hazards analysis: Data, models, applications, and opportunities. Earth Sci Rev 223:103858. https://doi.org/10.1016/j.earscirev.2021.103858
Baltrusaitis T, Ahuja C, Morency LP (2019) Multimodal machine learning: a survey and taxonomy. IEEE Trans Pattern Anal Mach Intell 41:423–443. https://doi.org/10.1109/TPAMI.2018.2798607
Mousavi S, Zhu W, Sheng Y, Beroza G (2019) CRED: a deep residual network of convolutional and recurrent units for earthquake signal detection. Sci Rep 9:1–14. https://doi.org/10.1038/s41598-019-45748-1
Montavon G, Samek W, Müller KR (2017) Methods for interpreting and understanding deep neural networks. Digit Signal Process 73:1–15
Bai YT, Jin XB, Wang XY, Wang XK, Xu JP (2020) Dynamic correlation analysis method of air pollutants in spatio-temporal analysis. Int J Environ Res Public Health 17:360
Zhang S, Zhang S, Zhang C, Wang X, Shi Y (2019) Cucumber leaf disease identification with global pooling dilated convolutional neural network. Comput Electron Agric 162:422–430. https://doi.org/10.1016/j.compag.2019.03.012
Anagnostis A, Asiminari G, Papageorgiou E, Bochtis D (2020) A convolutional neural networks based method for anthracnose infected walnut tree leaves identification. Appl Sci 10(2):469. https://doi.org/10.3390/app10020469
Butte S, Vakanski A, Duellman K, Wang H, Mirkouei A (2021) Potato crop stress identification in aerial images using deep learning-based object detection. arXiv preprint arXiv:2106.07770. https://doi.org/10.1002/agj2.20841
Zhang F, Wu S, Liu J, Wang C, Guo Z, Xu A, Pan K, Pan X (2021) Predicting soil moisture content over partially vegetation covered surfaces from hyperspectral data with deep learning. Soil Sci Soc Am J. https://doi.org/10.1002/saj2.20193
Elbeltagi A, Deng J, Wang K, Malik A, Maroufpoor S (2020) Modeling long-term dynamics of crop evapotranspiration using deep learning in a semi-arid environment. Agric Water Manag 241:106334
ElSaadani M, Habib E, Abdelhameed AM, Bayoumi M (2021) Assessment of a spatiotemporal deep learning approach for soil moisture prediction and filling the gaps in between soil moisture observations. Front Artif Intell 4:636234. https://doi.org/10.3389/frai.2021.636234
Contributions
SKC contributed to conceptualization, writing, editing, supervision; NSC contributed to writing, compiling tables, reviewing; DJ contributed to writing, preparing figures, reviewing; MKT, YAR and SA contributed to writing and editing. All the authors agreed to publish this paper in the present form.
Ethics declarations
Conflicts of interest
The authors declare they have no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Chakraborty, S.K., Chandel, N.S., Jat, D. et al. Deep learning approaches and interventions for futuristic engineering in agriculture. Neural Comput & Applic 34, 20539–20573 (2022). https://doi.org/10.1007/s00521-022-07744-x