Abstract
The growing demand for flaxseed as a source of healthy edible oil mandates the adoption of novel strategies for preserving its quantity and quality. Mechanical damage during harvest and handling is one of the important threats that can adversely affect the quality and viability of flaxseeds. Currently, mechanical damage assessment in grains is mainly performed by human visual inspection, which is a subjective and time-consuming procedure. In this study, the authors propose to utilize radiographic imaging with machine and deep learning tools to intelligently characterize mechanical damage in flaxseeds. Images were acquired under four levels of mechanical damage, and two strategies were used to discriminate seed damage: pattern recognition and convolutional neural networks (CNNs). In the former case, 69 morphological, color, and texture features were extracted. Various classifiers, namely, linear discriminant analysis (LDA), K-nearest neighbors (KNN), support vector machines (SVM), and decision trees, were used for the analysis. SVM provided the best performance with a classification accuracy of 87.4%. Furthermore, the analysis of variance (ANOVA) F-test feature selection algorithm was utilized, and the 17 most effective features were selected to be used with an SVM classifier to classify seeds with 88.4% accuracy. In the case of CNN-based classifiers, six state-of-the-art architectures were employed, including EfficientNet-B0, VGG19, ResNet18, MobileNet-v2, Inception-v3, and Xception. Among them, EfficientNet-B0 provided superior performance with a classification accuracy of 91.0%. The developed models’ high accuracy confirms the capabilities of radiographic imaging and artificial intelligence tools for rapid, reliable, and automated assessment of mechanical damage in flaxseeds.
Introduction
Oilseeds are a prominent part of diets worldwide and are well known as the primary source of edible oil. Among oilseeds, flaxseeds are highly desirable due to their high content of alpha-linolenic acid (an omega-3 fatty acid), fiber, and plant lignans. The global trade value of flaxseed was 425.3 million US dollars in 2021, with an expected compound annual growth rate of 12.8% during 2021–2026 (Mordorintelligence, 2022). The growth in market demand mandates minimizing losses in the quality and quantity of flaxseed throughout the supply chain. Therefore, threats such as spoilage, insect infestation, and mechanical stress should be closely monitored and mitigated. To this end, scholars have evaluated various potential contributing factors to control the aforementioned threats.
In the case of mechanical stress, damaged seeds may lose viability and yield or become susceptible to insect and fungal infestation, ultimately downgrading sample quality and price. A detailed overview of the potential adverse effects of mechanical damage on seed properties can be found elsewhere (Chen et al., 2020). To minimize the mentioned problems, scientists have focused on exploring the effect of moisture content (MC) and/or impact stress on the breakage susceptibility of various seeds (Erkinbaev et al., 2019; Khazaei et al., 2008; Nadimi et al., 2022; Shahbazi, 2011; Shahbazi et al., 2012, 2014, 2017). The published works have indicated that the appropriate selection of MC or maximum induced impact stress could minimize the mechanical damage to seeds.
Despite several previous works in this domain, the majority of prior efforts investigated the effect of MC and impact energy (IE) on the exterior surfaces of the samples through a visual inspection (Erkinbaev et al., 2019; Khazaei et al., 2008; Shahbazi, 2011; Shahbazi et al., 2012, 2014, 2017). However, visual inspection is cumbersome, slow, subjective, and limited to detecting apparent external damage. Moreover, some studies revealed that seeds’ external and internal damage may not always be highly correlated (Nadimi et al., 2022). Hence, developing a rapid, reliable, and intelligent system that could automatically assess seeds’ mechanical damage beyond the surface has always been of great interest. In this regard, Nadimi et al. (2022) recently demonstrated the capabilities of radiographic imaging and machine vision techniques in evaluating internal mechanical damage to flaxseeds. Two simple percentile-based classification algorithms were developed using the gray level distributions of radiographic images of mechanically damaged flaxseeds to classify them into two broad groups of nil/low and medium/high damage. The authors suggested the implementation of advanced machine learning algorithms to better discriminate the mechanically damaged seeds, which was not in the scope of their study and hence was not explored (Nadimi et al., 2022).
The capability of state-of-the-art data analysis tools such as machine learning and deep learning in fruit and grain quality evaluation has already been demonstrated in several works (Divyanth et al., 2022a; Erkinbaev et al., 2022; Hosainpour et al., 2022; Li et al., 2022; Nadimi et al., 2021; Sabzi et al., 2022). For instance, scholars have reported the applications of image processing (Anami et al., 2015; Chaugule & Mali, 2014; Cubero et al., 2011; Dubey et al., 2006) and machine learning–based models in estimating the ripeness of fruits (Kangune et al., 2019; Kheiralipour et al., 2022; Khojastehnazhand et al., 2019; Nanyam et al., 2012), identifying grain dockage (Paliwal et al., 2003; Sharma & Sawant, 2017), and segregating grain types (Arora et al., 2020; Velesaca et al., 2021). Similarly, the efficacy of convolutional neural network (CNN) models to monitor grain quality, detect infestations, classify grain grades and types, and identify damaged kernels has been reported in various studies (Bhupendra et al., 2022; Cubero et al., 2011; Divyanth et al., 2022b; Velesaca et al., 2021). Despite these promising results, to our knowledge, no effort has been made to utilize the aforementioned techniques to assess mechanical damage in flaxseed, an economically important nutraceutical and industrial oilseed. To address this knowledge gap, the present study aimed to employ machine learning and deep learning tools to classify mechanically damaged flaxseeds into four groups, viz., no damage (ND), low damage (LD), medium damage (MD), and high damage (HD).
Materials and Methodology
Samples
The samples and radiographic images used in this study were previously described in detail elsewhere (Nadimi et al., 2022). In summary, flaxseeds at three levels of MC (6, 8, and 11.5%) were subjected to four different stress levels, viz., 0 (control), 2, 4, and 6 mJ, forming 3 × 4 = 12 treatments. For each treatment, three replicates of 100 seeds were imaged using a soft 2D X-ray imaging system (model: MX-20, Faxitron Bioptics, LLC, Tucson, AZ). Overall, 3600 seeds (3 MC levels × 4 stress levels × 100 seeds × 3 replicates) were imaged in this study.
Research Workflow
As illustrated in Fig. 1, the proposed image processing algorithm involved image pre-processing, image labelling, feature extraction/selection, and image classification, which are discussed in the subsequent sections. All analyses were performed using MATLAB (R2022a, Mathworks Inc., Waltham, MA) software, with its statistics and machine learning, image processing, and deep learning toolboxes. The MATLAB application was run on Acer Nitro 5 Intel Core i5 9th Generation Laptop (32 GB/1 TB HDD/Windows 10 Home/GTX 1650 Graphics).
Image Pre-processing
The pre-processing of radiographic images (Fig. 2a) consisted of five main steps: (i) image enhancement using the imadjust function (Fig. 2b), (ii) image binarization through global thresholding (imbinarize) (Fig. 2c), (iii) applying a morphological opening operation (image erosion followed by dilation) to the mask, (iv) obtaining the corresponding masked image (Fig. 2d), and (v) extraction of individual seeds (Fig. 2e) using bounding box coordinates (regionprops function) of the mask. Seeds with undesired segmentation (such as overlapping seeds) were removed from the dataset (~ 4.6% of the entire dataset). Table 1 summarizes the applied image pre-processing steps with the corresponding MATLAB functions.
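The five steps above can be sketched in Python with scikit-image as an illustrative equivalent of the MATLAB functions named in the text (imadjust, imbinarize, regionprops); the thresholding method and structuring element here are assumptions, not the authors' exact settings.

```python
import numpy as np
from skimage import exposure, filters, morphology, measure

def extract_seeds(radiograph):
    # (i) contrast enhancement (analogous to imadjust)
    enhanced = exposure.rescale_intensity(radiograph)
    # (ii) global thresholding (analogous to imbinarize; Otsu is an assumption)
    mask = enhanced > filters.threshold_otsu(enhanced)
    # (iii) morphological opening to clean the mask
    mask = morphology.opening(mask, morphology.disk(1))
    # (iv) masked image: keep only seed pixels
    masked = np.where(mask, enhanced, 0)
    # (v) crop individual seeds via bounding boxes (analogous to regionprops)
    seeds = []
    for region in measure.regionprops(measure.label(mask)):
        r0, c0, r1, c1 = region.bbox
        seeds.append(masked[r0:r1, c0:c1])
    return seeds
```

Overlapping seeds would appear as a single merged region here, which is why such cases were discarded from the dataset.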
Seed Labelling
To obtain a comprehensive assessment of the severity of damage in flaxseeds, the individual seeds extracted in the “Image Pre-processing” section were carefully examined and segregated into four classes, i.e., ND, LD, MD, and HD. Damage was usually identified as a crack or indentation detectable in the radiographic images (see Fig. 3). The ND class represented sound/undamaged seeds (Fig. 3a), flaxseeds with slight damage (minor cracks) were assigned to LD (Fig. 3b), flaxseeds with multiple minor cracks and slight to medium indentations were assigned to MD (Fig. 3c), and HD seeds contained severe indentations and cracks (Fig. 3d). As expected, most of the ND seeds belonged to the 0 mJ and/or 2 mJ IE categories, many flaxseeds of the LD class were from the 2 mJ or 4 mJ IE categories, and most of the MD and HD seeds were impacted with 4 mJ and 6 mJ IE, respectively.
Seed Damage Analysis
As previously mentioned, algorithms needed to be developed to classify the flaxseeds into four classes, namely, ND, LD, MD, and HD. Two main strategies were deployed for this purpose—machine learning–based pattern recognition and a CNN-based approach (details are provided in the sections “Machine Learning and Pattern Recognition” and “Convolutional Neural Network”).
The image distribution in the dataset was as follows: 1452 images in the ND class, 723 in LD, 718 in MD, and 542 in HD. About 70% of the images in each class were reserved for training, while the remaining images were used as the test dataset (Table 2 provides detailed information on the dataset). The precision, recall, accuracy, and mean F1-score evaluation metrics were used to statistically analyze the classification performances. For a given class, precision is defined as the ratio of true positives (TP) to the total number of objects predicted for this class (TP + false positives (FP)), while recall is the ratio of TP to the actual number of objects in that class (TP + false negatives (FN)). The F1-score is the harmonic mean of precision and recall. Accuracy is the percentage of samples correctly classified (TP + true negatives (TN)) by the model. These metrics are given in Eqs. (1)–(4):

$$\mathrm{Precision}=\frac{TP}{TP+FP}$$
(1)

$$\mathrm{Recall}=\frac{TP}{TP+FN}$$
(2)

$$\mathrm{F1\text{-}score}=\frac{2\times \mathrm{Precision}\times \mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}}$$
(3)

$$\mathrm{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN}$$
(4)
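For a confusion matrix with rows as true classes and columns as predicted classes, the four metrics can be computed per class as follows; this is a generic numpy sketch, not the authors' code.

```python
import numpy as np

def classification_metrics(cm):
    """Per-class precision, recall, F1, and overall accuracy from a
    confusion matrix (rows = true classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                      # correctly classified per class
    precision = tp / cm.sum(axis=0)       # TP / (TP + FP), column-wise
    recall = tp / cm.sum(axis=1)          # TP / (TP + FN), row-wise
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()        # fraction correctly classified
    return precision, recall, f1, accuracy
```

The mean F1-score reported in the paper would then be the average of the per-class F1 values.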
Machine Learning and Pattern Recognition
As mentioned in the “Introduction” section, several previous works have utilized image texture, morphology, and color (TMC) features to examine the quality of agri-food products (Kheiralipour et al., 2022; Sabzi et al., 2022). Herein, an analogous approach was used to explore the feasibility of such information to assess mechanical damage in flaxseeds. The gray level co-occurrence matrix (GLCM) and gray level run-length matrix (GLRM) were used to derive the textural features. The GLCM is a measure of how often various combinations of pixel values (or gray levels) occur in a grayscale digital image (Mall et al., 2019). The GLRM, on the other hand, represents the occurrences of consecutive and collinear pixels of similar gray levels in the image (Preetha et al., 2018). Texture feature calculations use the contents of the GLCM and GLRM to quantify the variation in image texture (pixel values) at the pixel of interest. For feature extraction, only the region of interest (ROI, i.e., the flaxseed) was used. The ROI was quantized into 16 gray levels (selected after trial and error among 8, 16, 32, and 64 levels). For each quantized X-ray image, four GLCM and four GLRM matrices with orientations Θj ∈ [0°, 45°, 90°, 135°] were computed. Four statistics, namely, variance/inertia, correlation, uniformity, and homogeneity, were extracted from every GLCM matrix. From the GLRM matrices, 11 features were extracted, namely, short-run emphasis (SRE), long-run emphasis (LRE), gray level non-uniformity (GLN), run length non-uniformity (RLN), run percentage (RP), low gray level run emphasis (LGRE), high gray level run emphasis (HGRE), short-run low gray level emphasis (SRLGE), short-run high gray level emphasis (SRHGE), long-run low gray level emphasis (LRLGE), and long-run high gray level emphasis (LRHGE). The morphological features were the ROI’s regular area, convex area, perimeter, eccentricity, major axis length, minor axis length, and circularity.
The mean and standard deviation (SD) of the pixel intensities in the gray-scale ROI were utilized as two additional color features. Thus, a total of 69 features were extracted from each seed including 60 textural (4 features from GLCM × 4 orientations, and 11 features from GLRM × 4 orientations), seven morphological, and two color.
Machine learning algorithms, namely, linear discriminant analysis (LDA), K-nearest neighbors (KNN), support vector machines (SVMs), and decision trees were employed as classifiers on the above-derived features. The results of SVM have been discussed in detail in the “Results and Discussion” section due to its superior performance. The other classifiers’ results have been attached as supplementary material (Table S1).
It should be noted that non-linear kernel-based classifiers such as SVM have demonstrated advantages over other machine learning algorithms in many similar studies (Divyanth et al., 2022b, c; Neelakantan, 2021; Sujatha et al., 2021; Wang & Paliwal, 2006), as these classifiers are known for their memory efficiency, fast prediction, and favorable computational complexity. The TMC-extracted data were z-score normalized (to a mean of 0 and standard deviation of 1), and the “quadratic” kernel function was used for the SVM classifier (optimized using the Classification Learner app).
Initially, all the features were used to develop the classification model. However, since redundant features increase the complexity of the model, such features were eliminated through variable importance analysis. In this study, a well-established statistical approach for means comparison, the analysis of variance (ANOVA) F-test algorithm was used to determine the optimal features (Johnson & Synovec, 2002; Kumar et al., 2015; Pathan et al., 2022). Subsequently, another SVM-based classification model was developed using only the optimum features.
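The two-stage modelling described above (z-score normalization, ANOVA F-test feature ranking, quadratic-kernel SVM) can be sketched as a scikit-learn pipeline. The study used MATLAB's Classification Learner, so this is an assumed equivalent; k = 17 follows the paper, and the kernel parameters are illustrative.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

def build_model(k=17):
    """SVM classifier on the k features with the highest ANOVA F-scores."""
    return make_pipeline(
        StandardScaler(),                        # z-score: mean 0, SD 1
        SelectKBest(f_classif, k=k),             # ANOVA F-test feature selection
        SVC(kernel="poly", degree=2, coef0=1),   # "quadratic" kernel SVM
    )
```

Fitting the selector inside the pipeline ensures the F-test ranking is computed on the training split only, avoiding leakage into the test set.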
Convolutional Neural Network
A typical CNN is designed using the following set of layers: convolution layers, which are defined by the convolution filters that extract semantic features from the previous layers; pooling layers, which reduce the dimensions of the data by connecting a group of neurons from the previous layer to a single neuron, thus minimizing the computational requirements and help in generalizing the features; and fully connected layers, which process the activations/features in the form of flattened matrices to classify the image.
Herein, we used a transfer learning approach and evaluated the performance of six powerful and popular pre-trained deep convolutional networks, viz., EfficientNet-B0 (Tan & Le, 2019), VGG19 (Simonyan & Zisserman, 2014), ResNet18 (He et al., 2015), MobileNet-v2 (Sandler et al., 2018), Inception-v3 (Szegedy et al., 2014), and Xception (Chollet, 2016). Transfer learning reduces the training time required to differentiate between classes. The results of EfficientNet-B0 have been discussed in detail in the “Results and Discussion” section due to its better performance. Results of the other CNNs have been attached as supplementary material (Table S2) for comparison.
In the EfficientNet family of networks (Tan & Le, 2019), the three dimensions of width, depth, and resolution are scaled with a constant ratio (a technique called the compound scaling method), instead of being scaled up arbitrarily. A baseline network was created and then scaled up according to the computational requirement. A compound scaling coefficient \(\phi\) denotes the number of resources available and determines the scaling via the coefficients \(\alpha\), \(\beta\), and \(\gamma\), where \(depth\ (d)={\alpha }^{\phi }\), \(width\ (w)={\beta }^{\phi }\), and \(resolution\ (r)={\gamma }^{\phi }\). The constraint \(\alpha \times {\beta }^{2}\times {\gamma }^{2}\approx 2\) was enforced, such that the total number of floating-point operations (FLOPs) increases by no more than approximately \({2}^{\phi }\) for a given scaling factor. A grid search strategy was used to identify the relationship between the different scaling dimensions of the baseline network under the fixed resource constraint.
In the network used in this study, the value of \(\phi\) was set to 1; hence, the values of \(\alpha\), \(\beta\), and \(\gamma\) were found to be 1.2, 1.1, and 1.15, respectively. The architecture comprises mobile inverted bottleneck convolutions (also called inverted residual blocks), where the skip connections are made between the narrow parts, i.e., the start and end of the block (introduced in the MobileNet-v2 model (Sandler et al., 2018)). In the residual blocks, the first step widens the network using a 1 × 1 convolution, which is followed by a 3 × 3 depth-wise convolution, and then a 1 × 1 convolution again to shrink the network to match the initial number of channels. The network was pre-trained on the ImageNet dataset (Deng et al., 2010) before training on our data.
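The compound-scaling relations can be checked numerically with the coefficients reported above (φ = 1, α = 1.2, β = 1.1, γ = 1.15):

```python
# Compound scaling for EfficientNet-B0's reported coefficients
alpha, beta, gamma, phi = 1.2, 1.1, 1.15, 1

depth = alpha ** phi        # depth multiplier d = alpha^phi
width = beta ** phi         # channel-width multiplier w = beta^phi
resolution = gamma ** phi   # input-resolution multiplier r = gamma^phi

# FLOPs scale roughly with d * w^2 * r^2, so enforcing
# alpha * beta^2 * gamma^2 ~= 2 keeps the FLOPs increase near 2^phi.
flops_factor = alpha * beta ** 2 * gamma ** 2  # ~1.92, close to 2^1
```

For φ = 1 the factor is about 1.92, satisfying the ≈ 2 constraint stated above.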
The architecture of the CNN model is presented in Fig. 4. Since the last three layers in the original network are configured for 1000 classes (the number of classes in ImageNet), they were replaced by a new set of fully connected (FC), softmax, and classification layers corresponding to the four output classes. To fit the input size of the network, the images were resized to a dimension of 224 × 224 pixels by zero padding along the boundaries. Zero padding ensures that the morphological representations of the ROI (such as the area and perimeter) are not impaired, unlike interpolation-based image resizing operations. Image geometry-based augmentation techniques, such as translation along the x- and y-axes, random rotations (− 90° to + 90°), and x- and y-axis mirroring, were applied to the training data. Stochastic gradient descent with momentum (sgdm) was chosen as the network training optimizer, with the following hyperparameters: initial learning rate of 0.001, momentum of 0.9, weight decay factor of 0.0001, and a mini-batch size of 32. The maximum number of epochs was limited to 200, and an early stopping condition was enabled.
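The zero-padding resize described above can be sketched as follows; centering the crop on a black canvas (an assumption, any placement that keeps all pixels would do) preserves pixel-level morphology, unlike interpolation:

```python
import numpy as np

def pad_to_square(img, size=224):
    """Place a seed crop on a zero-valued canvas of size x size pixels,
    preserving the ROI's pixel-level area and perimeter."""
    h, w = img.shape[:2]
    assert h <= size and w <= size, "crop larger than target size"
    canvas = np.zeros((size, size) + img.shape[2:], dtype=img.dtype)
    top, left = (size - h) // 2, (size - w) // 2
    canvas[top:top + h, left:left + w] = img
    return canvas
```

Because no pixels are resampled, the sum of intensities (and hence the seed area in pixels) is identical before and after padding.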
To evaluate the performance of the CNNs, the models’ accuracy (Eq. 4) and cross-entropy loss were assessed. The cross-entropy loss can be expressed as (Altuwaijri & Muhammad, 2022; Ji et al., 2022):

$$L_{CE}=-\sum_{i=1}^{n}{t}_{i}\,\mathrm{log}({p}_{i})$$
(5)
where n is the number of classes, ti is the correct (truth) label (either 0 or 1), and pi is the softmax probability for the ith class. More details on cross-entropy loss calculations are available elsewhere (Ji et al., 2022; Mahjoubi et al., 2022; Matlab Crossentropy, 2022; Yeung et al., 2022).
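For a one-hot truth vector, the sum reduces to the negative log of the predicted probability of the true class; a minimal numpy sketch:

```python
import numpy as np

def cross_entropy(t, p, eps=1e-12):
    """Cross-entropy loss for one sample.
    t: one-hot truth vector over n classes (0s and a single 1)
    p: softmax probabilities over the same n classes
    eps guards against log(0)."""
    return -np.sum(t * np.log(p + eps))
```

A confident correct prediction (true-class probability near 1) gives a loss near 0, while a confident wrong prediction is penalized heavily.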
The details of the CNN architectures for MobileNet, Inception, ResNet18, VGG19, and Xception can be found in the original research papers (Chollet, 2016; He et al., 2015; Sandler et al., 2018; Simonyan & Zisserman, 2014; Szegedy et al., 2014). As with the EfficientNet-B0 model described above, the final layers (FC, softmax, and classification layers) were adjusted for our 4-class data, and the images were resized to match each network’s input size requirement.
Results and Discussion
Internal and external damage in seeds was noticeable as darker regions in the X-ray images; i.e., the gray values at the impaired regions were significantly lower than those of the sound portions of the flaxseeds (see Fig. 3). As mentioned in the “Seed Damage Analysis” section, two different approaches were utilized to classify flaxseeds based on the severity of their damage.
Table 3 shows the results of the SVM classification models for classifying mechanical damage in flaxseeds. The classifier using all the image features achieved an overall classification accuracy of 87.4%. The overall precision and recall for the model were 88.1% and 81.9%, respectively. The corresponding confusion matrix is provided in Fig. 5a. As anticipated, the flaxseeds of the ND class were classified with the highest precision of 92.7% and recall of 99.0%. From the confusion matrix, it can be observed that some seeds of the LD class were misclassified as ND, which explains the LD class’s reduced recall (72.7%). Its relatively poor precision (70.8%) was due to misclassifications of MD flaxseeds as LD (hence the reduced recall of the MD class). The HD class showed an appreciable F1-score of 90.3%. Most misclassifications occurred between classes representing adjacent damage severities.
Some previous studies report that SVMs tend to overfit when too many features are utilized to develop the model (Koklu et al., 2022; Thaiyalnayaki & Joseph, 2021). Hence, as suggested earlier, the redundant features were removed through the ANOVA approach. The rankings of the features based on the importance scores are provided in Fig. 6. Interestingly, among the top 30 features, 24 were derived from the GLRM, including GLN, LGRE, LRHGE, LRLGE, and RLN. Out of the remaining six features, two belonged to color, and four were morphological features.
Considering the observed differences between feature importance scores, variables with scores over 100 were considered optimal and were used to develop another SVM-based classification model. This means that only 25% of the TMC features were kept for further analysis. These features include GLN (0°, 45°, 90°, 135°), average intensity, LGRE (0°, 45°, 90°), LRHGE (0°, 45°, 90°, 135°), LRLGE (45°, 90°, 135°), RLN (90°), and SRE (90°).
After removing the redundant feature representations, the classification accuracy improved slightly to 88.4% from the previously achieved 87.4% (Table 3; Fig. 5b presents the confusion matrix). The total misclassification cost for the MD class decreased by around 10%. There was no improvement in predicting images of the HD class; however, the precision of the LD class and recall rate of the MD class showed some improvement. These results validate the potential of the implemented optimum feature selection strategy in reducing the computation time and power without compromising the system performance.
The classification performance of the CNN model is illustrated in the confusion chart (Fig. 7). The CNN training was stopped early (monitored using the fivefold cross-validation (CV) loss (Eq. 5) and CV accuracy (Eq. 4)) after nearly 2100 iterations to avoid overfitting. The CV accuracy exceeded 80.0% shortly after the 500th iteration; however, the rate of increase was very gradual over the next 400 iterations and reached a saturation point (the training plot is depicted in Fig. 8). An overall accuracy of 91.0% was achieved on the test data, and the final classification accuracy was 91.6% on fivefold cross-validation. From the matrix, the model was able to identify ND and HD flaxseeds with almost 100% and > 96% recall rates, respectively. High precision values (> 93%) were obtained for all classes except LD (76%). The LD class showed relatively poor precision (compared to the other classes) since a noticeable number of LD flaxseed samples were misclassified as MD and vice versa. The activation maps from the intermediate layers of the network were also inspected (Fig. 9). The model tends to learn finer details as the image moves to the deeper layers: the initial layers present the outlines of the shapes, whereas the activations fade and become more abstract as the image passes through subsequent layers of the network.
Undoubtedly, the CNN provided the best performance among the three classification models, with the highest accuracy of 91.0%. The precision and recall rates for the ND and HD classes were > 94%, with the MD class securing a recall rate of 93.2%. It can be noticed from Fig. 7 that the number of misclassifications between the MD and HD classes was reduced to a great extent compared with the confusion matrices produced by the feature extraction techniques.
In a relevant study (Nadimi et al., 2022), a percentile method based on SVM and LDA was applied to the gray level distribution of flaxseed X-ray images for a similar classification task. However, the maximum classification accuracies for the 2-class and 4-class tasks were limited to 87.2% and 60.0%, respectively, obtained using an SVM model. The present study shows that the CNN model outperforms those models, achieving accuracies of 95.2% and 91.0% for the 2-class and 4-class tasks, respectively.
It is worth mentioning that image feature extraction techniques have achieved appreciable performance for grain quality assessment in the literature. An accuracy of 99.6% was achieved by Singh and Chaudhury (2020) for classifying eight rice varieties using textural features from the GLCM and GLRM. Sapirstein et al. (1987) developed a discriminant analysis model primarily on grain morphological features (such as kernel length and width, area, aspect ratio, and contour length) that yielded 99.0% accuracy for classifying wheat, rye, barley, and oats in a four-way admixture. On a similar note, Visen et al. (2003) used textural and color characteristics to identify unknown grain types with over 90% accuracy. A high-speed system based on digital imaging was developed to identify defects in wheat kernels one by one using morphological and textural features of images captured at opposite angles (Delwiche et al., 2013). Analogous to our study, the derived morphological features were the area, perimeter, eccentricity, and major and minor axis lengths. In another study, an artificial neural network was used as a classifier on TMC-extracted grain image features for the identification of mechanical damage to corn and barley (Nowakowski et al., 2011).
Despite all the research works mentioned above, our thorough literature review indicates that the present work is the first to utilize machine learning and deep learning algorithms to assess mechanical damage in flaxseeds. The developed model has the potential to be implemented as a pre-screening technique in the agriculture industry to reduce the time and labor currently used in the mechanical damage assessment of grain and oilseeds.
Conclusion
To the best of our knowledge, this work is the first in-depth exploration of mechanical damage to flaxseed using radiographic imaging and artificial intelligence algorithms. Various machine learning and deep learning tools, such as pattern recognition, feature selection, and transfer learning, were used. The feature selection revealed that the average pixel intensity and GLCM- and GLRM-derived features were among the most important features for discriminating the severity of mechanical damage. However, the best performance was achieved using the EfficientNet-B0 CNN model, where the damaged flaxseeds were classified into four classes with an accuracy of 91.0%.
We believe the developed model can open a promising pathway for the automated detection of mechanical damage in the grain and seeds industry through further research.
Data Availability
The datasets generated during the current study are available from the corresponding author on reasonable request.
References
Altuwaijri, G. A., & Muhammad, G. (2022). A Multibranch of Convolutional Neural Network Models for Electroencephalogram-Based Motor Imagery Classification. Biosensors, 12(1), 22. https://www.mdpi.com/2079-6374/12/1/22
Anami, B. S., Naveen, N. M., & Hanamaratti, N. G. (2015). Behavior of HSI Color Co-Occurrence Features in Variety Recognition from Bulk Paddy Grain Image Samples. International Journal of Signal Processing, 8(4), 19–30. https://doi.org/10.14257/ijsip.2015.8.4.02
Arora, B., Bhagat, N., Saritha, L., & Arcot, S. (2020). Rice Grain Classification using Image Processing Machine Learning Techniques. Proceedings of the 5th International Conference on Inventive Computation Technologies, ICICT 2020, 205–208. https://doi.org/10.1109/ICICT48043.2020.9112418
Bhupendra, Moses, K., Miglani, A., & Kumar Kankar, P. (2022). Deep CNN-based damage classification of milled rice grains using a high-magnification image dataset. Computers and Electronics in Agriculture, 195, 106811. https://doi.org/10.1016/J.COMPAG.2022.106811
Chaugule, A., & Mali, S. N. (2014). Evaluation of Texture and Shape Features for Classification of Four Paddy Varieties. Journal of Engineering (united Kingdom). https://doi.org/10.1155/2014/617263
Chen, Z., Wassgren, C., & Kingsly Ambrose, R. P. (2020). A Review of Grain Kernel Damage: Mechanisms, Modeling, and Testing Procedures. Transactions of the ASABE, 63, 455–475. https://doi.org/10.13031/trans.13643
Chollet, F. (2016). Xception: Deep Learning with Depthwise Separable Convolutions. Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 1800–1807. https://doi.org/10.48550/arxiv.1610.02357
Cubero, S., Aleixos, N., Moltó, E., Gómez-Sanchis, J., & Blasco, J. (2011). Advances in Machine Vision Applications for Automatic Inspection and Quality Evaluation of Fruits and Vegetables. Food and Bioprocess Technology, 4, 487–504. https://doi.org/10.1007/s11947-010-0411-8
Delwiche, S. R., Yang, I. C., & Graybosch, R. A. (2013). Multiple view image analysis of freefalling U.S. wheat grains for damage assessment. Computers and Electronics in Agriculture, 98, 62–73. https://doi.org/10.1016/J.COMPAG.2013.07.002
Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2010). ImageNet: A large-scale hierarchical image database. 248–255. https://doi.org/10.1109/CVPR.2009.5206848
Divyanth, L. G., Chakraborty, S., Li, B., Weindorf, D. C., Deb, P., & Gem, C. J. (2022c). Non-destructive Prediction of Nicotine Content in Tobacco Using Hyperspectral Image–Derived Spectra and Machine Learning. Journal of Biosystems Engineering, 47(2), 106–117. https://doi.org/10.1007/S42853-022-00134-0
Divyanth, L. G., Chelladurai, V., Loganathan, M., Jayas, D. S., & Soni, P. (2022b). Identification of Green Gram (Vigna radiata) Grains Infested by Callosobruchus maculatus Through X-ray Imaging and GAN-Based Image Augmentation. Journal of Biosystems Engineering, 2022, 1–16. https://doi.org/10.1007/S42853-022-00147-9
Divyanth, L. G., Guru, D. S., Soni, P., Machavaram, R., Nadimi, M., & Paliwal, J. (2022a). Image-to-image translation-based data augmentation for improving crop/weed classification models for precision agriculture applications. Algorithms, 15(11), 401. https://doi.org/10.3390/a15110401
Dubey, B. P., Bhagwat, S. G., Shouche, S. P., & Sainis, J. K. (2006). Potential of Artificial Neural Networks in Varietal Identification using Morphometry of Wheat Grains. Biosystems Engineering, 95(1), 61–67. https://doi.org/10.1016/J.BIOSYSTEMSENG.2006.06.001
Erkinbaev, C., Morrison, J., & Paliwal, J. (2019). Assessment of seed germinability of mechanically-damaged soybeans using near-infrared hyperspectral imaging. Canadian Biosystems Engineering. https://doi.org/10.7451/cbe.2019.61.7.1
Erkinbaev, C., Nadimi, M., & Paliwal, J. (2022). A unified heuristic approach to simultaneously detect fusarium and ergot damage in wheat. Measurement: Food, 7, 100043. https://doi.org/10.1016/j.meafoo.2022.100043
He, K., Zhang, X., Ren, S., & Sun, J. (2015). Deep Residual Learning for Image Recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 770–778. https://doi.org/10.48550/arxiv.1512.03385
Hosainpour, A., Kheiralipour, K., Nadimi, M., & Paliwal, J. (2022). Quality assessment of dried white mulberry (Morus alba L.) using machine vision. Horticulturae, 8(11), 1011. https://doi.org/10.3390/horticulturae8111011
Ji, A., Quek, Y. T., Wong, E., & Woo, W. L. (2022). Detection and Classification System for Rail Surface Defects Based on Deep Learning. In IRC-SET 2021 (pp. 255–267). Springer, Singapore. https://doi.org/10.1007/978-981-16-9869-9_20
Johnson, K. J., & Synovec, R. E. (2002). Pattern recognition of jet fuels: Comprehensive GC×GC with ANOVA-based feature selection and principal component analysis. Chemometrics and Intelligent Laboratory Systems, 60(1–2), 225–237. https://doi.org/10.1016/S0169-7439(01)00198-8
Kangune, K., Kulkarni, V., & Kosamkar, P. (2019). Grapes Ripeness Estimation using Convolutional Neural network and Support Vector Machine. 2019 Global Conference for Advancement in Technology, GCAT 2019. https://doi.org/10.1109/GCAT47503.2019.8978341
Khazaei, J., Shahbazi, F., Massah, J., Nikravesh, M., & Kianmehr, M. H. (2008). Evaluation and modeling of physical and physiological damage to wheat seeds under successive impact loadings: Mathematical and neural networks modeling. Crop Science, 48(4), 1532–1544. https://doi.org/10.2135/cropsci2007.04.0187
Kheiralipour, K., Nadimi, M., & Paliwal, J. (2022). Development of an Intelligent Imaging System for Ripeness Determination of Wild Pistachios. Sensors, 22, 7134. https://doi.org/10.3390/s22197134
Khojastehnazhand, M., Mohammadi, V., & Minaei, S. (2019). Maturity detection and volume estimation of apricot using image processing technique. Scientia Horticulturae, 251, 247–251. https://doi.org/10.1016/J.SCIENTA.2019.03.033
Koklu, M., Unlersen, M. F., Ozkan, I. A., Aslan, M. F., & Sabanci, K. (2022). A CNN-SVM study based on selected deep features for grapevine leaves classification. Measurement, 188, 110425. https://doi.org/10.1016/J.MEASUREMENT.2021.110425
Kumar, M., Rath, N. K., Swain, A., & Rath, S. K. (2015). Feature Selection and Classification of Microarray Data using MapReduce based ANOVA and K-Nearest Neighbor. Procedia Computer Science, 54, 301–310. https://doi.org/10.1016/J.PROCS.2015.06.035
Li, X., Guillermic, R. M., Nadimi, M., Paliwal, J., & Koksel, F. (2022). Physical and microstructural quality of extruded snacks made from blends of barley and green lentil flours. Cereal Chemistry. https://doi.org/10.1002/cche.10574
Mahjoubi, S., Ye, F., Bao, Y., Meng, W., & Zhang, X. (2022). Identification and classification of exfoliated graphene flakes from microscopy images using a hierarchical deep convolutional neural network. arXiv preprint arXiv:2203.15252.
Mall, P. K., Singh, P. K., & Yadav, D. (2019). GLCM based feature extraction and medical X-RAY image classification using machine learning techniques. 2019 IEEE Conference on Information and Communication Technology, CICT 2019. https://doi.org/10.1109/CICT48419.2019.9066263
MathWorks. Crossentropy. Retrieved 24 Oct 2022, from https://www.mathworks.com/help/deeplearning/ref/dlarray.crossentropy.html
Mordorintelligence. (2022). Flax Seeds Market Size, Outlook | Industry Trends 2022 - 27. https://www.mordorintelligence.com/industry-reports/flaxseeds-market
Nadimi, M., Brown, J. M., Morrison, J., & Paliwal, J. (2021). Examination of wheat kernels for the presence of Fusarium damage and mycotoxins using near-infrared hyperspectral imaging. Measurement: Food, 4, 100011. https://doi.org/10.1016/J.MEAFOO.2021.100011
Nadimi, M., Loewen, G., & Paliwal, J. (2022). Assessment of mechanical damage to flaxseeds using radiographic imaging and tomography. Smart Agricultural Technology, 2, 100057. https://doi.org/10.1016/j.atech.2022.100057
Nanyam, Y., Choudhary, R., Gupta, L., & Paliwal, J. (2012). A decision-fusion strategy for fruit quality inspection using hyperspectral imaging. Biosystems Engineering, 111(1), 118–125. https://doi.org/10.1016/J.BIOSYSTEMSENG.2011.11.004
Neelakantan, P. (2021). Analyzing the best machine learning algorithm for plant disease classification. Materials Today: Proceedings. https://doi.org/10.1016/J.MATPR.2021.07.358
Nowakowski, K., Boniecki, P., Tomczak, R. J., & Raba, B. (2011). Identification process of corn and barley kernel damages using neural image analysis. Proceedings of SPIE, 8009, 75–79. https://doi.org/10.1117/12.896664
Paliwal, J., Visen, N. S., Jayas, D. S., & White, N. D. G. (2003). Cereal Grain and Dockage Identification using Machine Vision. Biosystems Engineering, 85(1), 51–57. https://doi.org/10.1016/S1537-5110(03)00034-5
Pathan, M. S., Nag, A., Pathan, M. M., & Dev, S. (2022). Analyzing the impact of feature selection on the accuracy of heart disease prediction. Healthcare Analytics, 2, 100060. https://doi.org/10.1016/J.HEALTH.2022.100060
Preetha, K., Preetha, K., & Jayanthi, D. S. K. (2018). GLCM and GLRLM based Feature Extraction Technique in Mammogram Images. International Journal of Engineering & Technology, 7(2.21), 266–270. https://doi.org/10.14419/ijet.v7i2.21.12378
Sabzi, S., Nadimi, M., Abbaspour-Gilandeh, Y., & Paliwal, J. (2022). Non-Destructive Estimation of Physicochemical Properties and Detection of Ripeness Level of Apples Using Machine Vision. International Journal of Fruit Science.
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L. C. (2018). MobileNetV2: Inverted Residuals and Linear Bottlenecks. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 4510–4520. https://doi.org/10.1109/CVPR.2018.00474
Sapirstein, H. D., Neuman, M., Wright, E. H., Shwedyk, E., & Bushuk, W. (1987). An instrumental system for cereal grain classification using digital image analysis. Journal of Cereal Science, 6(1), 3–14. https://doi.org/10.1016/S0733-5210(87)80035-8
Shahbazi, F. (2011). Impact Damage to Chickpea Seeds as Affected by Moisture Content and Impact Velocity. Applied Engineering in Agriculture, 27(5), 771–775. https://doi.org/10.13031/2013.39557
Shahbazi, F., Dolatshah, A., & Valizadeh, S. (2014). Evaluation and modelling the mechanical damage to cowpea seeds under impact loading. Quality Assurance and Safety of Crops and Foods, 6(4), 453–458. https://doi.org/10.3920/QAS2012.0120
Shahbazi, F., Dowlatshah, A., & Valizadeh, S. (2012). Breakage Susceptibility of Wheat and Triticale Seeds Related to Moisture Content and Impact Energy. Cercetari Agronomice in Moldova, 45(3), 5–13. https://doi.org/10.2478/v10298-012-0051-4
Shahbazi, F., Valizade, S., & Dowlatshah, A. (2017). Mechanical damage to green and red lentil seeds. Food Science and Nutrition, 5(4), 943–947. https://doi.org/10.1002/fsn3.480
Sharma, D., & Sawant, S. D. (2017). Grain quality detection by using image processing for public distribution. Proceedings of the 2017 International Conference on Intelligent Computing and Control Systems, ICICCS 2017, 1118–1122. https://doi.org/10.1109/ICCONS.2017.8250640
Simonyan, K., & Zisserman, A. (2014). Very Deep Convolutional Networks for Large-Scale Image Recognition. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings. https://doi.org/10.48550/arxiv.1409.1556
Singh, K. R., & Chaudhury, S. (2020). Comparative analysis of texture feature extraction techniques for rice grain classification. IET Image Processing, 14(11), 2532–2540. https://doi.org/10.1049/IET-IPR.2019.1055
Sujatha, R., Chatterjee, J. M., Jhanjhi, N. Z., & Brohi, S. N. (2021). Performance of deep learning vs machine learning in plant leaf disease detection. Microprocessors and Microsystems, 80, 103615. https://doi.org/10.1016/J.MICPRO.2020.103615
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., & Rabinovich, A. (2014). Going Deeper with Convolutions. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1–9. https://doi.org/10.48550/arxiv.1409.4842
Tan, M., & Le, Q. V. (2019). EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. 36th International Conference on Machine Learning, ICML 2019, 10691–10700. https://doi.org/10.48550/arxiv.1905.11946
Thaiyalnayaki, K., & Joseph, C. (2021). Classification of plant disease using SVM and deep learning. Materials Today: Proceedings, 47, 468–470. https://doi.org/10.1016/J.MATPR.2021.05.029
Velesaca, H. O., Suárez, P. L., Mira, R., & Sappa, A. D. (2021). Computer vision based food grain classification: A comprehensive survey. Computers and Electronics in Agriculture, 187, 106287. https://doi.org/10.1016/J.COMPAG.2021.106287
Visen, N. S., Paliwal, J., Jayas, D. S., & White, N. D. G. (2003). Image Analysis of Bulk Grain Samples Using Neural Networks. Canadian Biosystems Engineering / Le Genie Des Biosystems Au Canada, 46, 1. https://doi.org/10.13031/2013.15002
Wang, W., & Paliwal, J. (2006). Spectral Data Compression and Analyses Techniques to Discriminate Wheat Classes. Transactions of the ASABE, 49(5), 1607–1612. https://doi.org/10.13031/2013.22035
Yeung, M., Sala, E., Schönlieb, C. B., & Rundo, L. (2022). Unified Focal loss: Generalising Dice and cross entropy-based losses to handle class imbalanced medical image segmentation. Computerized Medical Imaging and Graphics, 95, 102026. https://doi.org/10.1016/j.compmedimag.2021.102026
Acknowledgements
The authors gratefully acknowledge the financial support provided by Mitacs, the Canada Foundation for Innovation (CFI), and the Natural Sciences and Engineering Research Council of Canada (NSERC).
Author information
Contributions
Mohammad Nadimi: methodology, data curation, conceptualization, project administration, investigation and writing (original draft). L.G. Divyanth: data analysis and writing (original draft). Jitendra Paliwal: funding acquisition, writing (reviewing and editing), and supervision.
Ethics declarations
Conflict of Interest
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Nadimi, M., Divyanth, L.G. & Paliwal, J. Automated Detection of Mechanical Damage in Flaxseeds Using Radiographic Imaging and Machine Learning. Food Bioprocess Technol 16, 526–536 (2023). https://doi.org/10.1007/s11947-022-02939-5