Abstract
The 21st century has witnessed a rapid convergence of manufacturing technology, computer science and information technology, leading to the Industry 4.0 paradigm. Developments in metallurgical and materials practice have hitherto been driven largely by the application of fundamental knowledge through experiments and experience. However, the mounting demands of high-performance products and environmental security call for 'right first time' manufacturing, in contrast to the traditional trial-and-error approach. In this context, a priori capability for the prediction and optimization of material, process and product variables is becoming the enabling factor. In recent times, research in materials science has increasingly embraced computational techniques in the development of exotic materials with greater reliability and precision. The present study is aimed at exploring computer vision and machine learning techniques in different application areas of materials science.
1 Introduction
It is appropriate to state that advances in materials science not only shape our daily lives but also promote growth; materials are now inseparable from progress. This makes the search for new materials a contemporary and critical subject in materials science. To accelerate the discovery and design of new materials, various computational approaches have been introduced in tandem with experimental processes.
The mechanical and physical properties of materials largely depend on the grain distribution and on the shape and size of the microstructural constituents. Thus, identification, classification and quantification of the microstructural constituents are important to establish the structure-property correlation of a specific material.
To accelerate manufacturing, many industries have recently taken an interest in automation through computer vision and image processing, which further enables the cost-effective design of materials to achieve targeted properties [1,2,3,4,5]. Microstructure modeling can be effective in introducing automation or feedback-based control of processes such as deformation processing and heat treatment [1]; such an approach predicts microstructural features, including the volume fractions of phases, during the heat treatment of steels. Earlier, optical microscopy was the only available technique for microstructure analysis, but in recent times various image-processing techniques have been developed for the same purpose [5]. Owing to their reproducible and consistent nature, computer vision and image-processing approaches are attracting increasing attention for the interpretation of data and the analysis of microstructural images [6]. Computational image-processing techniques are becoming faster than traditional approaches, with comparable reliability in understanding the microstructure [7]. An automated image-processing system operating on pre-processed images may be immensely helpful in accelerating the microstructure-process-property correlation (Fig. 1).
During the last two decades, computational approaches based on multi-scale physical principles have advanced rapidly. At the same time, constraints in understanding and applying physical principles in multi-component, multi-scale and multi-parameter scenarios have paved the way for data-driven techniques, such as machine learning, in the modeling, simulation and optimization of complex systems. Machine and deep learning techniques have therefore gained immense popularity in recent times across a wide range of fields, including engineering, finance, business and transport. It is thus worthwhile to examine the application of efficient machine and deep learning techniques to materials problems of scientific and industrial interest.
In an effort to predict the phases and crystal structures of multi-principal element alloys (MPEAs), correlations among five key features of the constituent elements have been studied.
Another exercise has demonstrated the development of an effective image-analysis framework for the classification of steel microstructures using deep learning, without the need for separate segmentation and feature-extraction mechanisms. Finally, a computational method has been developed to classify the different process routes of steels characterized by composition and properties. This study provides a model to predict the steel processing method based on experimental data on composition, process route and properties.
In essence, the present effort attempts to establish an accelerated process for designing exotic materials and processes by exploiting the knowledge and information hidden in the huge databases available from earlier research and industrial practice.
2 Different Application Areas of Materials Engineering
2.1 Multi-Component Alloys
The concept of multi-principal element alloys (MPEAs) is helpful in optimizing a set of properties while retaining the characteristic properties of the multiple principal elements, which makes such alloys especially useful [8, 9]. Conventional metal alloys have the disadvantage that they suffer from a trade-off between strength and toughness; MPEAs are important because they can exhibit superior mechanical properties compared to conventional metal alloys. The common phases in MPEAs are single-phase solid solution (SS), amorphous, intermetallic compounds (IM), and combined SS and IM phases [10]. The fact that MPEAs exist in different phases makes it possible to target either strength or toughness in order to obtain excellent mechanical properties. A large and varied composition space is available for designing MPEAs, which has not been explored exhaustively to date; the introduction of a rapid screening technique is therefore necessary to select compositions that yield the best balance of properties. Phase selection is thus an important step in the pathway to designing new MPEAs for various purposes and applications. The challenge lies in the obscurity of the mechanism of phase formation in MPEAs, which makes phase selection difficult to predict [10,11,12,13,14,15]. The conventional method of phase selection is mostly parametric, leading to empirical rules. For example, the mixing enthalpy (ΔHmix) and atomic radius difference (δ) are supposed to lie in certain ranges (−15 kJ mol−1 < ΔHmix < 5 kJ mol−1; 1% < δ < 5%) for the formation of an SS phase. These parametric approaches extend the Hume-Rothery rules, which state that the formation of a binary solid-solution alloy is governed by atomic size, crystal structure, valency and electronegativity [15,16,17,18,19,20,21,22,23,24,25].
The fact that phase selection depends on more than three parameters limits the predictive ability of parametric approaches, because such selections cannot be visualized manually owing to their complexity and dimensionality (Table 1).
Earlier, Raghavan et al. [16] analyzed phase formation in multi-component alloys. An attempt was made to forecast phase formation using a CALPHAD-based approach over a wide range of compositions; the stable phase is taken to be the initial phase formed, on cooling from the liquid state, with the highest driving force.
Guo et al. [14] studied high mixing entropy and found that it is not the only factor controlling solid-solution formation in equiatomic multi-component alloys; other determining factors include the mixing enthalpy (ΔHmix) and the atomic size difference (δ).
Lilensten et al. [17] presented a detailed analysis of the deformation mechanism of a quaternary BCC MPEA at room temperature. To achieve reproducible results, all analyses were performed on recrystallized microstructures.
Islam et al. [26] studied the correlations between five features, namely the valence electron concentration (VEC), the difference in Pauling electronegativities (Δχ), the atomic size difference (δ), the mixing enthalpy (ΔHmix) and the mixing entropy (ΔSmix), that lead to phase selection, using artificial neural networks on a dataset of 118 MPEAs.
Huang et al. [27] studied phase prediction in high entropy alloys (HEAs); using machine learning, this group presented an alternative path to discovering the phases of new HEAs. However, reliable criteria for predicting the evolution of a particular phase in a given compositional space are still awaited.
2.2 Identification and Quantification of Phases in Microstructures of Steel
In the endeavor of quantitatively correlating the microstructure and properties of steels, attempts have been made to employ image-processing techniques to identify and quantify the phases in steel microstructures.
Kesireddy et al. [28] investigated the effectiveness of training a neural network to recognize phases such as pearlite, ferrite, martensite and cementite using digital image processing. The model is useful for phase segmentation, but quantitative analysis of the phases was not attempted in their study.
Banerjee et al. [29] proposed a novel scheme for the automatic extraction of phases from microscopic images of dual-phase steel. They used various image-processing techniques, such as thresholding and edge detection, along with the Olysia software. However, segmentation of noisy images was not emphasized in their approach.
Gupta et al. [30] presented the processing and refinement of steel microstructure images to assist the computerized heat treatment of plain carbon steel. The proposed refinement aims to enable computer-aided simulation of the heat treatment of plain carbon steel in a time- and cost-efficient manner, and is hence beneficial for the materials and metallurgical industries [31].
In one attempt at grain boundary detection, Alysson et al. proposed a model using image-processing techniques to determine the average grain size [32]. More recently, a high-end computational approach, deep learning, has been introduced to overcome the complex problems of the traditional approach in a faster and more accurate way [33].
A deep learning based grain boundary segmentation approach for steel microstructures has been discussed earlier in the literature [34]. DeCost et al. [35] conducted a study using deep learning for high-throughput quantitative metallography of complex microstructures, using twenty-four ultra-high carbon steel images.
However, the inherent limitation is that such work remains empirical and is not suitable for classifying the various phases in steels having two or more phases [36]. Any effort towards recognizing phases in multiphase steel relies on morphological or crystallographic properties [37,38,39,40,41,42,43]. Pauly et al. employed data-mining techniques, proposing a morphological feature-extraction step followed by classification using a support vector machine [44, 45]. The method was applied to a chemically etched micrographic dataset of steel comprising scanning electron and optical micrographs. Though the method yielded reasonably reliable results, it could achieve only 50% accuracy in classifying microstructures, which might be due to the high complexity of the substructures (Fig. 2).
In an attempt to overcome the limitations of conventional image processing and analysis techniques, deep learning has gained significant attention in object classification and image segmentation for different applications using AlexNet [46, 47]. Other convolutional neural networks (CNNs), such as VGGNet and ResNet, have more layers than AlexNet and can achieve better accuracy [48]. For segmentation tasks, a tweaked version of the CNN proposed by Long et al. [49], the fully convolutional neural network (FCNN), is used to perform classification via semantic segmentation. Currently, FCNNs are the popular trend, with consistent efforts to extend them towards the higher benchmarks of image segmentation [50,51,52] (Table 2).
2.3 Composition-Process-Property Correlation
The determination of the material processing method is of great significance with respect to the performance of steel, as variations in the processing route, even for a specific grade of steel, cause different microstructures, which in turn influence the properties [54]. During manufacturing, material and process parameters have to be controlled, and hence these are ideally the input variables for determining the process. At present, a large number of research initiatives aim to predict the steel processing route and the mechanical properties of steel using various computational methodologies, especially those based on artificial neural networks, pattern recognition, etc. [55,56,57,58,59,60,61,62,63,64].
In 2009, Brahme and Winning designed an artificial neural network based model that predicts the cold-rolling fiber textures of steel from texture intensities, carbon content, carbide content and the amount of rolling reduction [65].
Similarly, Simecek and Hajduk developed the MECHP tool to predict the mechanical properties of hot-rolled steel products, which uses measured process data such as the water cooling and subsequent air cooling of hot-rolled narrow plate and wire [66].
Zhi Xu et al. [67] presented a study in which a CNN model with an optimal structure is used to describe metallurgical phenomena in steel rolling processes.
3 Machine Learning Techniques
Metallurgical research and industrial activities generate huge volumes of data, which may be effectively utilized for extracting quantitative and qualitative knowledge. During the last few decades, metallurgical processes have increasingly utilized computational techniques. As the concept of multi-principal element alloys becomes popular, it is essential to develop trustworthy tools that can reliably predict the various phases that evolve in MPEAs depending on parameters such as the mixing entropy and mixing enthalpy.
Microstructural image analysis has also been found to be effective in delineating phases, which may be helpful in identifying the volume fraction, area fraction, grain boundaries, etc. In this endeavor, digital image processing has been widely used on metallographic images; in recent times, deep learning is also widely used for faster analysis.
Similarly, the selection among the various processing routes for manufacturing steels, i.e. hot rolling, cold drawing, annealing and spheroidizing, is also directly or indirectly decided by the composition (e.g. C, Mn, P) and the mechanical properties (e.g. yield strength, tensile strength, elongation). Hence, a reliable model based on the composition-process-property correlation may be useful to predict the appropriate process schedule for achieving the target properties in a given compositional space.
3.1 Machine Learning Based Prediction of Phases and Crystal Structure in Multi-Components Alloys
3.1.1 Description of Computational Scheme
The current study deals with the selection of phases in MPEAs and the prediction of the crystal structure of solid solutions. The steps involved are as follows:
i. Collection of datasets from the literature.

ii. Splitting of the dataset into two parts, for training and testing purposes.

iii. Selection among different classifiers to find the best option.

iv. Training the model with the training data to learn the trends of the data sample.

v. Testing of the model performance based on the training.

vi. Finally, calculation of accuracy with the help of a confusion matrix.
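The steps above can be sketched in a few lines of code. This is a minimal illustration only: the toy dataset and the majority-class baseline are stand-ins for real MPEA data and a real classifier.

```python
# Minimal sketch of the scheme above: split a labeled dataset, "train" a
# trivial majority-class baseline, and score it via a confusion matrix.
import random
from collections import Counter

random.seed(0)

# Toy labeled data: (features, phase label) -- illustrative values only
data = [((i, i % 3), "SS" if i % 2 == 0 else "IM") for i in range(20)]
random.shuffle(data)

split = int(0.8 * len(data))            # step ii: 80/20 train/test split
train, test = data[:split], data[split:]

# step iv: "train" a baseline that always predicts the majority class
majority = Counter(y for _, y in train).most_common(1)[0][0]

# steps v-vi: test the model and build a confusion matrix
confusion = Counter()
for _, y_true in test:
    y_pred = majority
    confusion[(y_true, y_pred)] += 1

accuracy = sum(n for (t, p), n in confusion.items() if t == p) / len(test)
print(confusion, accuracy)
```

In a real study, the majority-class baseline would be replaced by the classifiers discussed in Sect. 3.1.2, with the same split/train/test/score skeleton.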
3.1.2 Description of Computational Tools (Machine Learning)
Machine learning (ML) originated in the late 1980s as an important tool for optimization, having been derived from artificial intelligence (AI) research of the 1960s as an ally of expert systems. Machine learning counts among its notable achievements applications such as speech and word recognition [68], autonomous driving systems [69] and backgammon playing. Recent studies indicate that machine learning is a major innovative driving force that will gain impetus for a technological revolution in the coming decades [70]. Supervised machine learning is the branch of machine learning used for performing classification and regression tasks on labeled data [71, 72]. Machine learning algorithms learn from examples much as animals do, except that the machines are conditioned with virtual rewards in place of treats: a virtual reward is given when the machine makes a correct decision, and withheld otherwise. A machine learning program has to formulate a simple rule that best explains the observed behavior, checking it against various candidate functions; the rewards are given when the expectations are valid for the given data. Machine learning algorithms deal with input and output spaces through derivations and conclusions, and the reliability of the induction increases with the number of inputs that can be formulated in a mathematical framework [73]. The algorithms learn through automatic parameter adjustment driven by the data. For high-dimensional inputs, machine learning is powerful and beneficial in eliminating the inefficiency created by manual programming [74]. Recently, the concept of machine learning has gained reasonable acceptance in the physical sciences and materials science; however, its applications are still limited with respect to materials informatics [75].
For a machine learning algorithm, a training set of pairs (xi, yi) is given, where i ranges from 1 to N and x is a d-dimensional variable given as input to a function (say g) that maps the input x to the output y. The algorithm is then tested to check whether the function correctly maps x to y: the learned function g should correctly reproduce the examples of a testing set distinct from the training set. Every machine learning algorithm has three components: representation, evaluation and optimization [76].
The purpose of using machine learning algorithms is to analyze high volumes of data. If the database is small, the model may suffer from overfitting, which manifests as high training accuracy but very low testing accuracy. Overfitting is analyzed in terms of the bias-variance trade-off, which captures the tendency of a model to fit data noise or "hallucinate patterns" [77]. A simple hypothesis set results in an algorithm with small variance but high bias [78]. This commonly followed principle guides machine learning and underlies quantitative methods collectively called regularization [79]. Even if the hypothesis set contains the true function, the probability that the algorithm returns a poor hypothesis is high when the training set is small.
From Fig. 3 it is observed that simple models show a higher error for large datasets, which can be attributed to high bias; their training-set error Ein converges quickly towards the testing-set error Eout, corresponding to low variance. Complex models, on the other hand, have a small error for large datasets (small bias), but their training-set error Ein is meaningless for too few data points (high variance) and converges only slowly towards the testing-set error Eout as the dataset grows. In this study, several machine learning models, namely naïve Bayes, support vector machine, k-nearest neighbor, decision tree and random forest, have been employed, as presented in the following sections.
Naïve Bayes The naïve Bayes (NB) classifier is a straightforward probabilistic classifier based on applying Bayes' theorem [72, 80, 81]. It rests on the presumption that the features are conditionally independent given the class; this presumption makes the Bayesian calculation far more efficient, although it also restricts the classifier's applicability. NB classifiers can be trained very efficiently using a relatively limited quantity of data to estimate the parameters required for classification: since the variables are assumed independent, only the variances of the features for each class need to be determined, not the whole covariance matrix. The advantage of the Bayes classifier is thus that it requires only a limited quantity of training data to estimate its parameters [82]. In practice, the classifier is powerful enough to be useful despite genuine shortcomings of its underlying probability model. A naïve Bayes model can be formulated as

$$ P\left( {c_{i} |D} \right) = \frac{{P\left( {D|c_{i} } \right)P\left( {c_{i} } \right)}}{{P\left( D \right)}} $$

where \( P(c_{i} |D) \) and \( P(D|c_{i} ) \) are the conditional or posterior probabilities and \( P\left( {c_{i} } \right) \) and \( P\left( D \right) \) are the prior probabilities.
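The Bayes rule above, with the independence presumption, can be sketched from scratch. The categorical features and labels below are invented for illustration and are not real alloy data.

```python
# Hedged sketch of naive Bayes: categorical features, Laplace smoothing,
# class posterior proportional to prior * product of per-feature likelihoods.
from collections import Counter, defaultdict

train = [(("low", "small"), "SS"), (("low", "large"), "IM"),
         (("high", "small"), "SS"), (("high", "large"), "IM"),
         (("low", "small"), "SS")]

classes = Counter(y for _, y in train)
# Per-class, per-feature-position value counts (for P(D|c_i) under independence)
counts = defaultdict(Counter)
for x, y in train:
    for j, v in enumerate(x):
        counts[(y, j)][v] += 1

def predict(x):
    best, best_score = None, -1.0
    for c, n_c in classes.items():
        score = n_c / len(train)                     # prior P(c_i)
        for j, v in enumerate(x):
            # likelihood with Laplace smoothing to avoid zero probabilities
            score *= (counts[(c, j)][v] + 1) / (n_c + 2)
        if score > best_score:
            best, best_score = c, score
    return best

print(predict(("low", "small")))
```

Note that \( P(D) \) is omitted here: it is the same for every class, so the argmax over classes is unaffected.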
Support Vector Machine Support vector machines (SVMs) are among the most widely used supervised machine learning methods [83,84,85]. The working principle of an SVM is to place the training examples on either side of a hyperplane (a hyperplane in an n-dimensional Euclidean space divides the space into two disconnected parts) that separates the two data classes, while maximizing the margin between them. Maximizing the margin in this way produces the largest possible distance between the separating hyperplane and the points on either side, which has been shown to reduce an upper bound on the expected generalization error. If the training set is linearly separable, then the weight vector \( W \) and bias \( b \) satisfy [86]

$$ y_{i} \left( {W \cdot x_{i} + b} \right) \ge 1,\quad i = 1, \ldots ,N $$
For linearly separable points, once the optimal separating hyperplane is found, the data points that lie on its margin are known as support vector points, and the solution is a linear combination of just these points (see Fig. 4); the other data points are ignored.
In the soft-margin formulation, slack variables \( \xi_{i} \ge 0 \) are included when the points are not linearly separable; \( w \) and \( b \) are then obtained by solving

$$ \mathop {\min }\limits_{w,b,\xi } \;\frac{1}{2}\left\| w \right\|^{2} + C\sum\limits_{i = 1}^{N} {\xi_{i} } \quad {\text{subject to}}\quad y_{i} \left( {w \cdot x_{i} + b} \right) \ge 1 - \xi_{i} $$
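The soft-margin objective can be minimized by sub-gradient descent on the hinge loss. The sketch below is illustrative only: the 2D points are invented so that the two classes are well separated, `lam` plays the role of the regularization trade-off, and labels are +1/−1 as in the margin constraint.

```python
# Hedged sketch of a linear soft-margin SVM trained by sub-gradient descent
# on lam*||w||^2 + hinge loss max(0, 1 - y*(w.x + b)).
pts = [((2.0, 3.0), 1), ((3.0, 2.5), 1), ((3.0, 4.0), 1),
       ((-2.0, -1.5), -1), ((-1.0, -2.0), -1), ((-3.0, -2.5), -1)]

w, b, lr, lam = [0.0, 0.0], 0.0, 0.05, 0.01
for epoch in range(200):
    for (x1, x2), y in pts:
        margin = y * (w[0] * x1 + w[1] * x2 + b)
        if margin < 1:
            # point inside the margin: hinge-loss gradient plus shrinkage
            w[0] += lr * (y * x1 - 2 * lam * w[0])
            w[1] += lr * (y * x2 - 2 * lam * w[1])
            b += lr * y
        else:
            # point outside the margin: only the regularization shrinkage
            w[0] -= lr * 2 * lam * w[0]
            w[1] -= lr * 2 * lam * w[1]

def classify(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b >= 0 else -1

print([classify(x1, x2) for (x1, x2), _ in pts])
```

In practice one would use a library solver; this loop only illustrates how the hinge term activates for points that violate the margin.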
K Nearest Neighbor The k-nearest neighbor (k-NN) classifier is a nonparametric classifier that decides the class label of a query point \( x^{\# } \) on the assumption that points of the same class are found close to one another when a suitable proximity measure is used [72, 87,88,89]. This means that the class label of \( x^{\# } \) is taken to be the label shared by its nearest neighbors \( x_{i}^{*} \). For a given distance metric, e.g. the Euclidean distance, d can be calculated as:

$$ d\left( {x_{i}^{*} ,x^{\# } } \right) = \left\| {x_{i}^{*} - x^{\# } } \right\|_{2} $$
where \( \left\| {x_{i}^{*} - x^{\# } } \right\|_{2} \) is the l2 norm. In general, the prediction for a given k is made by taking the majority vote of the class labels among the k nearest neighbors, with each vote weighted according to the distance weight factor w = 1/d2.
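The distance-weighted voting rule above can be sketched directly. The 2D points and the two classes "A"/"B" below are illustrative only.

```python
# Hedged sketch of k-NN with Euclidean (l2) distance and vote weight w = 1/d^2.
import math
from collections import defaultdict

train = [((1.0, 1.0), "A"), ((1.5, 2.0), "A"), ((0.5, 1.5), "A"),
         ((3.0, 4.0), "B"), ((5.0, 7.0), "B"), ((3.5, 4.5), "B")]

def knn_predict(x, k=3):
    # l2 norm between the query point and every training instance
    dists = sorted((math.dist(x, xi), yi) for xi, yi in train)
    votes = defaultdict(float)
    for d, yi in dists[:k]:
        votes[yi] += 1.0 / (d * d + 1e-12)   # weight each vote by 1/d^2
    return max(votes, key=votes.get)

print(knn_predict((1.2, 1.3)))
```

The small `1e-12` guards against division by zero when the query coincides with a training point; a library implementation would handle this case explicitly.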
Decision Tree Decision trees (DTs) are based on the principle of building nonlinear decision frontiers from linear separators that can be expressed as hyperplanes [90, 91]. Consider a labeled dataset (xn, yn), with n in the range 1 to N. Figure 5 shows an example of such hyperplanes for the case where x has only two coordinates, x1 and x2. The data labels, i.e. the associated values of y, are represented by colors. The objective of the algorithm is to separate the data points on the basis of their labels. The example considers a finite set of labels, red and blue; the classification task thus has just two labels against which the values of x are classified according to y.
The machine learning function is derived from the combination of all learned hyperplanes [92, 93]. Trees can be used to represent piecewise-constant functions: each node in the tree is associated with a hyperplane (as shown in Fig. 5). Thus, for all x ∈ ℝL, the function can be expressed as:

$$ g\left( x \right) = \sum\limits_{l = 1}^{L} {a_{l} \,1\left\{ {x \in K_{l} } \right\}} $$
where Kl is a subset of ℝL bounded by hyperplanes orthogonal to the canonical basis (i.e. one of the "boxes"), K1…KL is a partition of ℝL, and al represents the value of the label attributed to the box Kl.
For a classification problem, al is determined by majority vote in Kl. In the previous example, considering one box, if there were more points with the blue label than with the red label, the value associated with al would be blue, and the label blue would be assigned to any testing data point falling within this box (hyperplane). Summing over the K classes, the majority vote can be expressed as:

$$ a_{l} = \arg \mathop {\max }\limits_{k = 1, \ldots ,K} \sum\limits_{{x_{n} \in K_{l} }} {1\left\{ {y_{n} = k} \right\}} $$
For a regression problem, al is determined by the empirical mean of the points in Kl:

$$ a_{l} = \frac{1}{{\left| {K_{l} } \right|}}\sum\limits_{{x_{n} \in K_{l} }} {y_{n} } $$
Identifying the points at which to split the axes to build the hyperplanes is the most challenging task in building trees. The identification of splits proceeds recursively, one node after another. The best binary split has to be identified at each node from the set of all possible splits ti,τ, where i corresponds to the ith axis of x and τ corresponds to the point of split. Values of τ can be chosen in different ways, for example through histograms or regularly spaced points. A local loss function L is optimized to choose the best split within the given set. The loss function is calculated at each node, with the entry parameters varying accordingly: at each node, for each candidate split, the current dataset S is split into two subsets, left and right, which are used to evaluate the local loss and select the split. The left or right subset selected at the previous node becomes the dataset S at the next node. The procedure is iterative, and it ends when a stopping criterion is reached, i.e. either the maximum depth of the tree or the maximum number of leaves. For classification problems, the split choice involves minimizing a loss function derived from an impurity criterion G, which can be the Gini index. For a dataset S with k classes, G(S) is represented as:

$$ G\left( S \right) = \sum\limits_{j = 1}^{k} {p_{j} \left( {1 - p_{j} } \right)} $$

where \( p_{j} \) is the proportion of points in S belonging to class j.
The corresponding loss function is:

$$ L\left( {t_{i,\tau } ,S} \right) = \frac{{N_{\text{left}} }}{N}G\left( {S_{\text{left}} } \right) + \frac{{N_{\text{right}} }}{N}G\left( {S_{\text{right}} } \right) $$
The loss compares the two subsets, left and right, separated by the binary split \( t_{i,\tau } \); their cardinalities are \( N_{\text{left}} \) and \( N_{\text{right}} \). The axis and split point chosen at a given node are \( (\hat{i},\hat{\tau }) = \arg \min_{i,\tau } L\left( {t_{i,\tau } ,S} \right) \).
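The split criterion above can be sketched for one axis: compute the Gini impurity G(S) and scan candidate thresholds to find the split minimizing the weighted loss. The 1D data with "red"/"blue" labels (as in Fig. 5) are illustrative.

```python
# Hedged sketch of Gini-based split selection on a single axis.
from collections import Counter

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

# 1D feature values with class labels; the classes separate around x = 2
S = [(0.5, "blue"), (1.0, "blue"), (1.5, "blue"),
     (2.5, "red"), (3.0, "red"), (3.5, "red")]

def best_split(S):
    best_tau, best_loss = None, float("inf")
    xs = sorted(x for x, _ in S)
    for a, b in zip(xs, xs[1:]):          # candidate thresholds between points
        tau = (a + b) / 2
        left = [y for x, y in S if x <= tau]
        right = [y for x, y in S if x > tau]
        # weighted loss: N_left/N * G(left) + N_right/N * G(right)
        loss = (len(left) * gini(left) + len(right) * gini(right)) / len(S)
        if loss < best_loss:
            best_tau, best_loss = tau, loss
    return best_tau, best_loss

print(best_split(S))
```

A full tree builder would apply this scan to every axis at every node and recurse on the left and right subsets, but the per-node computation is exactly this loop.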
Random Forest The random forest (RF) aims to reduce the variance problem of decision trees by adding randomness to the construction of the trees, such that multiple trees are created at the same time [94]. The predictions of the randomized trees are then averaged for decision making. Randomness in the creation of trees comes from two sources: bagging (or bootstrapping) and selecting only a subset of the axes (or hyperplanes).
In bagging, each tree is grown using only a bootstrap sample of the original training dataset. Secondly, selecting only a subset of the hyperplanes while growing the tree limits each split to a smaller set of candidate axes; these small sets of axes are represented by F in some notations. That the variance of the random forest estimator is less than that of a single decision tree is easily observed in practice [95, 96]. Generally, random forests remain faster than several other machine learning algorithms; in addition, random forest classifiers are accurate and yield very good performance when the hyperparameters are tuned.
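The bagging idea above can be sketched with decision stumps standing in for full trees: bootstrap-sample the training set, fit one stump per sample, and aggregate by majority vote. The toy 1D dataset is illustrative.

```python
# Hedged sketch of bagging: bootstrap samples + one decision stump each,
# ensembled by majority vote. Stumps are stand-ins for full trees.
import random
from collections import Counter

random.seed(1)
# 1D feature in [0, 3.9]; label changes at feature value 2.0
data = [(x / 10.0, "blue" if x < 20 else "red") for x in range(40)]

def fit_stump(sample):
    # pick the threshold that best separates the labels in this sample
    best = None
    for tau in sorted(set(x for x, _ in sample)):
        left = [y for x, y in sample if x <= tau]
        right = [y for x, y in sample if x > tau]
        correct = max(Counter(left).values(), default=0) + \
                  max(Counter(right).values(), default=0)
        if best is None or correct > best[0]:
            labl = Counter(left).most_common(1)[0][0]
            labr = Counter(right).most_common(1)[0][0] if right else labl
            best = (correct, tau, labl, labr)
    _, tau, labl, labr = best
    return lambda x: labl if x <= tau else labr

# Bagging: each stump sees a bootstrap sample (drawn with replacement)
forest = [fit_stump(random.choices(data, k=len(data))) for _ in range(15)]

def forest_predict(x):
    return Counter(t(x) for t in forest).most_common(1)[0][0]

print(forest_predict(0.5), forest_predict(3.5))
```

A real random forest would additionally restrict each split to a random subset of axes; with only one axis here, that second source of randomness is omitted.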
3.2 Classification of Steel Micrographs using Deep Learning
3.2.1 Description of Dataset
In this work, 959 carbon steel microstructure images, taken from the literature [97], are used for training and testing purposes. These microstructures have different primary microconstituent phases, such as martensite (M), pearlite (P), spheroidite (S) (precipitate), pearlite and spheroidite (P + S), pearlite and Widmanstätten (P + W), and spheroidite and Widmanstätten (S + W). Table 3 indicates the variation in the dataset.
3.2.2 Description of Computational Scheme
The aim of this work is to propose a rational and effective strategy for classifying the constituent phases present in a steel microstructure through machine intelligence. The steps involved are as follows:
i. Collection of sample images: input images are collected from previous literature.

ii. Preprocessing of those images, such as denoising.

iii. Selection of source model: a pre-trained residual network is chosen from the available models.

iv. Reuse of the model: the pre-trained model is used as the starting point for a model on the second task of interest. This may involve using all or parts of the model, depending on the modeling technique used.

v. Tuning the model: optionally, the model may need to be adapted or refined on the input-output pair data available for the task of interest. In this work, one more layer is added to the model for classification purposes.
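The preprocessing step (ii) can be sketched with a 3x3 median filter, a common way to remove salt-and-pepper noise from micrographs before they are fed to a network. The tiny "image" with a single salt-noise pixel is illustrative only.

```python
# Hedged sketch of denoising: a 3x3 median filter over a grayscale grid.
import statistics

noisy = [[10, 10, 10, 10],
         [10, 255, 10, 10],    # a single salt-noise pixel
         [10, 10, 10, 10],
         [10, 10, 10, 10]]

def median_filter(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]        # borders are left unchanged
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = statistics.median(window)
    return out

clean = median_filter(noisy)
print(clean[1][1])   # the outlier is replaced by the neighbourhood median
```

Median filtering is preferred over mean filtering here because a single outlier cannot drag the median, so phase boundaries stay sharp.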
3.2.3 Description of Computational Tools (Deep Learning)
Deep learning is the study of neural networks (NNs) and of "end to end" learning mechanisms; such mechanisms are capable of extracting complex features from input data and processing those data to learn the model. Unlike traditional approaches, deep learning performs feature extraction and data classification simultaneously.
Convolutional Neural Network Convolutional neural networks are deep neural networks primarily used to classify images, cluster them by similarity, and perform identification through a self-learning mechanism [34, 98,99,100]. CNNs are widely used in various recognition applications, such as face recognition, medical image identification and transportation. Unlike artificial neural networks, where the input is a vector, here the input is a multi-channel image. A typical CNN comprises stacked layer components named convolution, pooling, flattening and full connection, as shown in Fig. 6.
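The convolution component named above can be sketched as a single 3x3 kernel slid over a small grayscale "micrograph" with valid padding. The image and the vertical-edge kernel are illustrative, not part of any real network.

```python
# Hedged sketch of one convolution layer: 3x3 kernel, valid padding, ReLU.
image = [[0, 0, 0, 9, 9, 9],
         [0, 0, 0, 9, 9, 9],
         [0, 0, 0, 9, 9, 9],
         [0, 0, 0, 9, 9, 9]]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]   # responds to vertical edges, like the 0 -> 9 boundary

def conv2d(img, ker):
    kh, kw = len(ker), len(ker[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            # dot product of the kernel with the image patch at (i, j)
            acc = sum(img[i + u][j + v] * ker[u][v]
                      for u in range(kh) for v in range(kw))
            row.append(max(acc, 0))      # ReLU activation after convolution
        out.append(row)
    return out

feature_map = conv2d(image, kernel)
print(feature_map)   # strongest response along the vertical edge
```

In a trained CNN the kernel weights are learned rather than hand-set, and many such feature maps are stacked, followed by pooling and fully connected layers.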
Transfer Learning Human beings naturally pass information between activities: what we gain when learning one task is knowledge that helps solve related tasks. The more connected the activities are, the more easily the information can be transferred and cross-used; a simple example is that the skill of riding a motorbike helps in learning to ride a similar vehicle. Convolutional neural networks require a relatively large amount of data to learn features from images and use them for classification. If this requirement is not fulfilled, the framework has problems with generalization, which leads to overfitting. Recently, CNNs have been used for a wide variety of applications, and a challenge faced by studies involving CNNs is the lack of a sufficiently large labeled dataset, which severely hampers the classification accuracy of the model. To combat this issue, data augmentation techniques such as generative adversarial networks (GANs) have been used to create new images from previously available images [101]. Generally, GANs require a large number of training examples to learn the complexities of the dataset and generate valid images, though some recent work has shown the effectiveness of GANs trained with small amounts of data [102]. Other techniques such as semi-supervised learning have also been used in this respect, combining labeled and unlabeled data in the learning process [103]. These methods, along with other data augmentation methods such as affine transformation, have several drawbacks: they carry the computational overhead of generating the images (in the case of GANs) or of performing the transformations. Moreover, these methods assume that the distribution and complexity of the labeled and unlabeled data are the same, which is not the case in most real-life applications.
Transfer learning is the ability of a system to recognize and apply knowledge and skills learned in previous tasks to novel tasks or new domains that share some commonality [104, 105]. It is now among the most commonly used methodologies for training models: a pre-trained model can be fine-tuned on the dataset at hand to produce appropriate results, and can thus be easily reused for image classification problems. Transfer learning is, in effect, an optimization technique that saves training time and yields better performance.
Given a source domain \( D_{S} \) and learning task \( T_{S} \), and a target domain \( D_{T} \) and learning task \( T_{T} \), transfer learning aims to improve the learning of the target predictive function \( \varOmega_{T} \) in \( D_{T} \) using the knowledge in \( D_{S} \) and \( T_{S} \), where \( D_{S} \ne D_{T} \) or \( T_{S} \ne T_{T} \) [37]. Here, a domain \( D \) is defined as a set containing two components: the feature space \( X \) and the probability distribution of the features in this space, \( P\left( X \right) \). A task \( T \) is likewise a set containing two components: the label space \( Y \) and the predictive function \( \varOmega \). A typical pre-trained model is ResNet, which has been trained on a dataset containing a huge number of pictures belonging to various classes. Figure 7 shows a schematic representation of the working framework of a transfer learning model.
Deep Transfer Learning Strategies Deep transfer learning strategies have made it possible to handle complex problems and yield reliable outcomes. However, the training time and the amount of data required by such frameworks are substantially greater than for conventional ML frameworks. Given a target task, identifying the commonality between the new task and previous (source) tasks, and transferring knowledge from the previous tasks to the target one, is carried out using various pre-trained networks such as those trained on ImageNet [106], AlexNet [107] and VGG-16 [108]. Transfer learning, on the other hand, neither requires this computational overhead nor assumes any similarity between the complexities and domains of the training and testing datasets. A schematic diagram of a pre-trained network is shown in Fig. 8. Transfer learning extends the analogy that neural network-based classifiers draw with the human mind. These networks have been trained on the ImageNet dataset.
The ImageNet dataset is composed of over 14 million images in 1000 classes. The lower layers of a network are responsible for extracting low-level features of an image such as shapes, curves and lines, while the higher layers extract and learn more complex features. Most of the features learned in the lower layers are common across many computer vision problems; therefore, there is no need to learn them from scratch. Models such as VGG, AlexNet and ResNet have been trained on the vast ImageNet dataset. To learn the intricacies of a target dataset, only the upper layers of these models need to be trained; the lower layers are frozen and their weights are not varied. Finally, a fully connected dense layer is added to capture the specificity of the target dataset.
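A minimal numerical sketch of this freeze-and-fine-tune idea, using a hypothetical two-layer toy network rather than any of the named models: the "pre-trained" lower-layer weights are held fixed, and gradient updates touch only the newly added head.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" lower layer: stands in for a frozen generic feature extractor.
W_frozen = rng.normal(size=(8, 4))

# New trainable head for the target task, initialized from scratch.
W_head = np.zeros((1, 8))

def forward(x):
    feats = np.tanh(W_frozen @ x)   # lower layers: reused as-is
    return W_head @ feats           # upper layer: the part being fine-tuned

def train_step(x, y, lr=0.1):
    """One gradient step on squared error; only the head is updated."""
    global W_head
    feats = np.tanh(W_frozen @ x)
    err = (W_head @ feats) - y
    W_head -= lr * err * feats      # gradient w.r.t. the head only
    # W_frozen is never touched: its weights stay frozen
```

Repeated calls to `train_step` fit the head to the target data while the feature extractor remains unchanged, mirroring how the frozen lower layers of an ImageNet-trained model are reused.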
Residual Network (ResNet-18) One of the most popular pre-trained networks is the residual network (ResNet-18). ResNet achieves better accuracy than a comparable plain network because it utilizes skip connections [109, 110]. A skip connection bypasses a network layer, decreasing the computational complexity and preserving accuracy as the network gets deeper. Figure 9(b) shows how a layer is skipped by adding the output of the previous layer to the output of the next layer, thereby skipping the current layer.
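The skip connection of Fig. 9(b) can be written as y = F(x) + x. A toy sketch with illustrative weights (not the actual ResNet-18 block, which uses convolutions and batch normalization):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """y = relu(F(x) + x): the identity shortcut adds the input back to the block output."""
    f = relu(w2 @ relu(w1 @ x))   # F(x): two small fully connected "layers"
    return relu(f + x)            # skip connection: x bypasses the block
```

Even when the learned weights contribute nothing (all zeros), the input still passes through the shortcut unchanged, which is why residual blocks remain easy to optimize as depth grows.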
3.3 Computer Vision Approaches for Phase Segmentation from Steel Micrograph
3.3.1 Description of Computational Tools
Digital Image Digital image processing is a branch of computerized methods that is gaining rapid popularity in the study of metallographic images [7, 111]. An image is a two-dimensional light intensity function f(x,y), where x and y are coordinates in a plane [112, 113]. The value of the function varies with the image brightness or the gray level of each matrix element; such matrix elements are referred to as pixels. Digital images are usually processed through the following techniques.
Image Pre-Processing Preparing an image by improving the image data for further analysis is termed pre-processing. The original images, which are essentially color (red–green–blue, RGB) images, are converted to grayscale with 16 levels or to binary scale, whichever is applicable.
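A minimal sketch of these two pre-processing steps, assuming the common ITU-R BT.601 luminance weights for the RGB-to-grayscale conversion; the binarization threshold is an illustrative choice:

```python
import numpy as np

def rgb_to_gray(rgb):
    """Luminance-weighted RGB-to-grayscale conversion (BT.601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def binarize(gray, threshold=128):
    """Simple global threshold to a binary image (1 = foreground, 0 = background)."""
    return (gray >= threshold).astype(np.uint8)
```

In practice, metallographic work would choose the threshold from the image histogram (e.g. Otsu's method) rather than a fixed value.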
Noise Reduction Noise reduction is an important task in the image pre-processing steps of computer vision, and several techniques have been explored for removing noise from images. The solutions can be grouped into two classes, namely linear methods and nonlinear methods. Among these, two nonlinear methods are most popular because their behavior is well suited to the human visual system (HVS): they adapt to certain special noises that are impulsive and multiplicative in nature and thus difficult to remove by linear methods [114].
These approaches are based on two principles: the non-local means technique and the sparseness of data [115]. In a non-local means algorithm, the mean value over all pixels is calculated, weighted by the similarity of each pixel to the target pixel. As a result, the filtered image is clearer and loses fewer details than with a local means algorithm. In an image, many pixels have the same values; these are known as self-similar pixels [116]. In Fig. 10, q1, q2 and q3 are three neighboring pixels with respect to p.
In Fig. 10, most of the pixels in the vicinity of p will have properties similar to those of p's neighborhood. Such self-similarity can be used for de-noising the image: the de-noised value of a pixel is determined from pixels with similar neighborhoods, which is the working principle of the non-local means (NL-means) de-noising algorithm. The NL-means estimate is obtained from the following expression

$$ NL\left[ V \right]\left( p \right) = \mathop \sum \limits_{q} w\left( {p,q} \right)V\left( q \right) $$

where V is the noisy image and w(p,q) are pixel weights that must satisfy two conditions, \( 0 \le w\left( {p,q} \right) \le 1 \) and \( \mathop \sum \limits_{q} w\left( {p,q} \right) = 1 \). Each pixel's weight is calculated from the similarity between its neighborhood and that of p.
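The weighted-average principle above can be sketched for a single pixel as follows; the Gaussian form of the weights and the smoothing parameter h follow the standard NL-means formulation [115], while the patch size here is an illustrative choice:

```python
import numpy as np

def nl_means_pixel(V, p, patch=1, h=10.0):
    """De-noised value of pixel p: a weighted average of all pixels in V,
    weighted by the similarity of their patch neighbourhoods to p's."""
    pad = np.pad(V, patch, mode='reflect')
    def nbhd(i, j):
        # (2*patch+1)-square neighbourhood centred on original pixel (i, j)
        return pad[i:i + 2 * patch + 1, j:j + 2 * patch + 1]
    ref = nbhd(*p)
    weights = np.zeros_like(V, dtype=float)
    for i in range(V.shape[0]):
        for j in range(V.shape[1]):
            d2 = np.mean((nbhd(i, j) - ref) ** 2)   # patch dissimilarity
            weights[i, j] = np.exp(-d2 / h ** 2)
    weights /= weights.sum()                        # enforces sum_q w(p,q) = 1
    return np.sum(weights * V)
```

On a constant image every weight is equal and the pixel value is returned unchanged; on a noisy image, pixels whose neighbourhoods resemble p's dominate the average, which is exactly the self-similarity property exploited above.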
Edge Detection Along with noise reduction, edge detection also receives considerable attention in image processing. Various methods are used for edge detection, involving statistical calculations, differentiation, machine learning, active contouring, multi-scaling and anisotropic diffusion [117]. Anisotropic methods use morphological edge detectors on sparse representations of image data and are considered state-of-the-art for edge detection [118]. Various nature-inspired neural network models have been proposed [119], along with several machine-learning approaches [120]. Multi-fractal methods [121] and Markov models [122] are also effective techniques for detecting edges in images. Different working principles are available for image segmentation; they can be broadly categorized into (a) traditional methods, such as normalized cut methods (NCM), efficient graph-based methods (EG), mean shift (MS), level set (LS) and ratio contour (RC), and (b) soft computing techniques. Traditional methods use thresholding, morphological operations and edge-based segmentation. Soft computing is based on techniques such as fuzzy theory, artificial neural networks (ANN) and genetic algorithms (GA). The accuracy and adaptability of soft computing techniques make them the most widely adopted. A few other methods are state transition algorithms, spanning tree-based methods [123] and fuzzy logic-based techniques [124].
Hybrids of machine learning and Markov models [125] are also available and have been implemented for image segmentation. The pixel-based method [126] is a segmentation technique that starts by initializing seed points representing the regions and iteratively includes other pixels in the individual regions; the division follows grayscale properties. The available pool of literature provides numerous techniques for both edge detection and segmentation [127]. The Sobel edge detector follows a gradient-based method based on first-order derivatives: it calculates the first-order derivatives of the image separately along the x and y axes, one kernel being formed by rotating the other by 90°. If X is the input image and Gx and Gy are the gradient components in the two orientations, the operator uses two 3 × 3 convolution kernels.
Here the X axis denotes increasing value toward the right, and the Y axis denotes increasing value downward. The resultant gradient magnitude can be measured by

$$ G = \sqrt{G_{x}^{2} + G_{y}^{2}} $$

and the gradient direction is given by

$$ \theta = \tan^{ - 1} \left( {\frac{{G_{y} }}{{G_{x} }}} \right) $$
It is observed that Sobel edge detection takes considerably less computational time than the Laplacian of Gaussian (LoG) and Canny edge detection algorithms, with the additional advantage of simplicity, as it uses only a first-derivative filter. Hence, the chance of feature loss during edge detection is low. A step-wise illustration of the image processing techniques applied to metallographic images is shown in Fig. 11.
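A direct sketch of the Sobel operator described above, with the two 3 × 3 kernels (one the 90° rotation of the other) and the gradient magnitude and direction computed from Gx and Gy:

```python
import numpy as np

KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)   # horizontal-gradient kernel (Gx)
KY = KX.T                                  # vertical-gradient kernel (Gy), 90-degree rotation

def sobel(image):
    """Return per-pixel gradient magnitude and direction (valid region only)."""
    h, w = image.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros_like(gx)
    for i in range(h - 2):
        for j in range(w - 2):
            win = image[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * KX)
            gy[i, j] = np.sum(win * KY)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)   # G = sqrt(Gx^2 + Gy^2)
    direction = np.arctan2(gy, gx)           # theta = atan(Gy / Gx)
    return magnitude, direction
```

On a vertical step edge the detector reports a strong horizontal gradient and a direction of zero (pointing along +x), as expected.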
4 Conclusion
In this study, some application areas in metallurgical practice that can be addressed using computer vision and machine learning techniques are explored. With the rapid evolution of integrated computational materials engineering (ICME), machine learning has been widely applied to the improved design of available alloys and the discovery of new alloys with better technological performance.
In the first part, multi-principal element alloys (MPEAs) are explored on the basis of experimental results reported in the literature. For the design of MPEAs, a large and varied composition space is available that has yet to be explored exhaustively, so a rapid screening technique is necessary to select the target combination of phases for a given compositional space. Hence, various classification techniques are discussed to obtain better classification accuracy on the MPEA dataset.
Classification of steel microstructures based on the primary constituent phases has been discussed in order to identify the salient features of each microstructure. This work demonstrates the feasibility of an effective steel microstructure classification scheme using deep learning methods without the need for separate segmentation and feature extraction mechanisms, allowing it to handle complex microstructures with higher levels of noise. For this purpose, pixel-wise microstructural image segmentation using a pre-trained residual network and transfer learning has been developed. The findings of the present framework based on transfer learning constitute a significant and effective step towards both qualitative and quantitative interpretation of steel microstructures. A data-driven model has been developed that is capable of conducting qualitative and quantitative analysis of microstructures of plain carbon steel.
A computational method has been developed to classify the different process routes of steel processing based on the composition–process–property correlation. The analysis ultimately provides a correlation among the constituent elements and the process parameters. This study provides a model that predicts the steel processing method based on experimental results available in the literature.
The present work can be extended to benefit metallurgical industries by assisting in the quantitative and qualitative analysis of phase evolution, and thereby the prediction of properties based on microstructure–property correlations. The accuracy of the proposed method can be improved by increasing the size of the dataset, and there is adequate scope for optimizing the model complexity needed to obtain accurate performance while reducing computational time.
This work will also be helpful, directly or indirectly, in estimating physical and mechanical properties based on structure–property correlations. The study predicts the steel processing method from experimental data, and the same approach can be extended to predict the properties of different materials from their composition and process routes, provided sufficient datasets are available.
In summary, an attempt has been made in the present study to highlight the encouraging convergence among the principles of materials science, computing techniques and informatics, which has been identified as a driver of Industry 4.0. It is reasonable to expect that the approaches addressed in the present study will be taken up in future research, particularly in the domain of materials informatics.
References
Reddy VK, Halder C, Pal S (2016) Influence of carbon equivalent content on phase transformation during inter–critical heating of dual phase steels using discrete micro-scale cellular automata model. Trans Indian Inst Met 70(4):909–915
Samuels LE (1999) Light microscopy of carbon steels. ASM International, Cleveland
Schwartz AJ, Kumar M, Adams BL (2000) Electron backscatter diffraction in materials science. Kluwer Academic/Plenum Publishers, New York
Krauss G (2015) Steels: processing, structure, and performance, vol 2. ASM International, Cleveland
Rekha S, Raja VKB (2017) Review on microstructure analysis of metals and alloys using image analysis techniques. In: IOP conference series: materials science engineering, pp. 197–202
Kesireddy A, McCaslin S (2015) Application of image processing techniques to the identification of phases in steel metallographic specimens. In: Elleithy K, Sobh T (eds) New trends in networking, computing, e-learning, systems sciences and engineering. Lecture notes in electrical engineering. Springer, Cham, p 312
Latala Z, Wojnar L (2001) Computer-aided versus manual grain size assessment in a single phase material. In: STERMAT 2000: stereology and image analysis in materials science
Gludovatz B, Hohenwarter A, Thurston K et al (2016) Exceptional damage-tolerance of a medium-entropy alloy CrCoNi at cryogenic temperatures. Nat Commun 7:10602
Gale W, Totemeier T (2003) Smithells metals reference book. Elsevier, Amsterdam
Miracle DB, Senkova ON (2017) A critical review of high entropy alloys and related concepts. Acta Mater 122:448–511
Murty BS, Yeh JW, Ranganathan S (2014) High entropy alloys, 1st edn. Butterworth-Heinemann, Boston
Yang X, Zhang Y (2012) Prediction of high-entropy stabilized solid-solution in multi-component alloys. Mater Chem Phys 132:233–238
Guo S (2015) Phase selection rules for cast high entropy alloys: an overview. Mater Sci Technol 31:1223–1230
Sheng G, Liu CT (2011) Phase stability in high entropy alloys: formation of solid-solution phase or amorphous phase. Prog Nat Sci 21:433–446
Poletti MG, Battezzati L (2014) Electronic and thermodynamic criteria for the occurrence of high entropy alloys in metallic systems. Acta Mater 75:297–306
Raghavan R, Hari Kumar KC, Murty BS (2012) Analysis of phase formation in multi-component alloys. J Alloys Compd 544:152–158
Lilensten L, Couzinié JP, Perrière L et al (2018) Study of a BCC multi-principal element alloy: tensile and simple shear properties and underlying deformation mechanisms. Acta Mater 142:131–141
Yeh JW (2016) Recent progress in high-entropy alloys. Annales de chimie science des materiaux 31:633–648
Lu ZP, Wang H, Chen MW et al (2015) An Assessment on the future development of high-entropy alloys: summary from a recent workshop. Intermetallics 66:67–76
Zhang Y, Zuo T, Tang Z et al (2014) Microstructures and properties of high-entropy alloys. Progress in Material Science 61:1–93
Senkov ON, Miller JD, Miracle DB et al (2015) Accelerated exploration of multi-principal element alloys with solid solution phases. Nat Commun 65:1–10
Pickering EJ, Jones NG (2016) High-entropy alloys: a critical assessment of their founding principles and future prospects. Int J Mater Rev 61:183–202
Zhou YJ, Zhang Y, Wang YL et al (2007) Solid solution alloys of AlCoCrFeNiTix with excellent room-temperature mechanical properties. Appl Phys Lett 90:181–904
Ye YF, Wang Q, Lu J et al (2015) High-entropy alloy: challenges and Prospects. Mater Today 19:349–362
Juan YF, Li J et al (2019) Modified criterions for phase prediction in the multi-component laser-clad coatings and investigations into microstructural evolution/wear resistance of FeCrCoNiAlMox laser-clad coatings. Appl Surf Sci 465:700–714
Islam N, Huang W, Zhuang HL (2018) Machine learning for phase selection in multi-principal element alloys. Comput Mater Sci 150:230–235
Huang W, Houlong PM, Zhuang L (2019) Machine-learning phase prediction of high-entropy alloys. Acta Mater 169:225–236
Kesireddy A, McCaslin S (2015) Application of image processing techniques to the identification of phases in steel metallographic specimens. In: Elleithy K, Sobh T (eds) New trends in networking, computing, e-learning, systems sciences, and engineering. Lecture notes in electrical engineering, vol 312. Springer, Cham, pp 425–430
Banerjee S, Ghosh SK, Datta S et al (2013) Segmentation of dual phase steel micrograph: an automated approach. Measurement 46:2435–2440
Gupta S, Panda A, Naskar R et al (2017) Processing and refinement of steel microstructure images for assisting in computerized heat treatment of plain carbon steel. J Electron Imaging 26:063010
Dutta T, Banerjee S, Saha SK (2017) Noise removal and image segmentation in micrographs of ferrite–martensite dual-phase steel. In: Asia-Pacific engineering and technology conference, pp 638–646
Alysson ND, Eduardo AH, Fernandes et al (2005) Grain size measurement by image analysis: an application in the ceramic and in the metallic industries. In: 18th international congress of mechanical engineering, Ouro Preto, pp 1–7
LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436–444
Azimi SM, Britz D, Engstler M et al (2018) Advanced steel microstructural classification by deep learning methods. Sci Rep 8:2128
Decost BL, Holm EA (2015) A computer vision approach for automated analysis and classification of microstructural image data. Comput Mater Sci 110:126–133
Beraha E, Shpigler B (1977) Color metallography. American Society for Metals, Metals Park
Shrestha SL et al (2014) An automated method of quantifying ferrite microstructures using electron backscatter diffraction (EBSD) data. Ultramicroscopy 137:40–47
Bhadeshia H, Honeycombe R (2006) Steels: microstructure and properties. Elsevier, Amsterdam
Gerdemann F (2010) Bainite in medium carbon steels. Verlag J, Shak
Friel J (2000) Practical guide to image analysis. In: ASM International the materials information society
Ohser J, Muecklich F (2000) Statistical analysis of microstructures in materials science. Wiley, Hoboken
Britz D, Webel J, Schneider A (2017) Identifying and quantifying microstructures in low-alloyed steels: a correlative approach. Metall Italiana 3:5–10
Britz D, Hegetschweiler A, Roberts M et al (2016) Reproducible surface contrasting and orientation correlation of low carbon steels by time resolved Beraha color etching. Mater Performance Charact 5:553–563
Masci J, Meier U, Ciresan D, et al (2012) Steel defect classification with max-pooling convolutional neural networks. In: Proc. Int. Jt. Conf. Neural Networks
Pauly J, Britz D, Mücklich F (2016) Advanced microstructure classification using data mining methods. In: TMP
Drucker H, Burges C, Kaufman L, et al (1996) Support vector regression machines. In: Neural information processing systems (NIPS)
Krizhevsky A, Ilya S, Hinton G (2012) Imagenet classification with deep convolutional neural networks. In: Neural information processing systems (NIPS)
Deng J et al (2009) Imagenet: a large-scale hierarchical image database. In: IEEE conference on computer vision and pattern recognition (CVPR)
Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. In: International conference on learning representations (ICLR)
Long J, Shelhamer E, Darrel T (2015) Fully convolutional networks for semantic segmentation. In: IEEE conference on computer vision and pattern recognition (CVPR)
Everingham M, Van Gool L, Williams CK (2012) The PASCAL visual object classes challenge (VOC2012) results
Cordts M, et al (2016) The cityscapes dataset for semantic urban scene understanding. In: IEEE conference on computer vision and pattern recognition (CVPR)
Macenko M, Niethammer M, Marron JS (2009) A method for normalizing histology slides for quantitative analysis. In: Biomedical imaging: from nano to macro, ISBI’09, IEEE international symposium on, IEEE, pp. 1107–1110
Choudhury A, Naskar R, BasuMallick A, Pal S (2019) Computer vision approach for phase identification from steel microstructure. J Eng Comput Emerald Insight 36(6):1913–1932
Parkins RN, Elices M, Sánchez-Gálvez V et al (1982) Environment Sensitive Cracking of Pre-stressing Steels. J Corrosion Sci 22:379–405
Das SK, Kumari S (2010) A multi-input multi-output neural network model to characterize mechanical properties of strip-rolled high strength low alloy (HSLA) steel. In: Proceedings of the international conference on modelling and simulation, pp 23–25
Ward L, Agrawal A, Choudhary A (2016) A general-purpose machine learning framework for predicting properties of inorganic materials. NPJ Comput Mater 2:16028
Piekarska W, Króliszewska DG (2017) Analytical Methods of Predicting the Structure and Mechanical Properties of High Tensile Strength Steel. Procedia Engineering 177:92–98
Weng Y, Zhao Y, Tang G et al (2013) Prediction of the mechanical properties of hot-rolled C-Mn steels by single index model. In: 8th international conference on computer science & education, Colombo, pp 275–280
Ramprasad R, Batra R, Pilania G et al (2017) Machine learning in materials informatics: recent applications and prospects. NPJ Comput Mater 3:54
Gan Y, Liu ZD, Wang GD, et al. (2006) On-line application of structure and property prediction system on hot rolling line on 2050 HSM at Baosteel Iron and Steel 41: 39–44
Majta J, Kuziak R (1996) Use of the computer simulation to predict mechanical properties of c-mn steel, after thermo-mechanical processing. J Mater Process Technol 60:581–588
Bokota T, Domański T (2009) Modelling and numerical analysis of hardening phenomena of tools steel elements. Arch Metall Mater 54:575–587
Wang L, Mu Z, Guo H (2006) Application of support vector machine in the prediction of mechanical property of steel materials. J Univ Sci Technol Beijing, Mineral, Metallurgy, Material 13:512–515
Al-Ketan GD, Soliman A, AlQubaisi AM et al (2018) Nature inspired lightweight cellular co-continuous composites with architected periodic gyroidal structures. Adv Eng Mater 20
Brahme A, Winning M, Raabe D (2009) Prediction of cold rolling texture of steel using an artificial neural network. Comput Mater Sci 46:800–804
Simecek P, Hajduk D (2007) Prediction of mechanical properties of hot rolled steel products. J Achieve Mater Manuf Eng 20:395–398
Xu Z, Liu X, Zhang K (2019) Mechanical properties prediction for hot rolled alloy steel using convolutional neural network. IEEE Access 7:47068–47078
Waibel A, Hanazawa T, Hinton G et al (1989) Phoneme recognition using time-delay neural networks. IEEE Trans Acoust Speech Signal Process 37:328–339
Pomerleau DA (1989) Alvinn: An Autonomous land vehicle in a neural network. Technical Report, DTIC Document
Tesauro G (1992) Practical issues in temporal difference learning. Springer, Berlin
Camastra F, Vinciarelli A (2015) Machine learning for audio, image and video analysis: Theory and applications. Springer, London
Naik DL, Sajid HU, Kiran R (2019) Texture-based metallurgical phase identification in structural steels. A Supervised Mach Learn Approach Metals 9:546
Mitchell TM (1997) Machine learning. McGraw Hill, Burr Ridge, p 45
Domingos P (2012) A few useful things to know about machine learning. Commun ACM 55:78–87
Rajan K (2005) Materials informatics. Mater Today 8:38–45
Calaprice A (2010) The ultimate quotable Einstein. Princeton University Press, Princeton
Mitchell TM (1997) Machine learning. McGraw Hill, Burr Ridge, p 45
Abu-Mostafa YS, Magdon-Ismail M, Lin HT (2012) Learning from data. AMLBook
McCallum A, Nigam K (2003) A Comparison of event models for Naïve Bayes text classification. J Mach Learn Res 3:1265–1287
Rish I, Hellerstein J, Thathachar J (2001) An analysis of data characteristics that affect NaïveBayes performance. IBM T.J. Watson Research Center 30Saw Mill River Road, Hawthorne, NY 10532, USA
Domingos P, Pazzani M (1997) On the optimalityof the simple Bayesian classifier under zero-one loss. Mach Learn 29(2–3):103–130
Vapnik V (1995) The nature of statistical learning theory. Springer, Berlin
Burges C (1998) A tutorial on support vector machines for pattern recognition. Data Min Knowl Disc 2:1–47
Cristianini N, Shawe-Taylor J (2000) An Introduction to support vector machines and other kernel-based learning methods. Cambridge University Press, Cambridge
Kotsiantis SB (2007) Supervised machine learning: a review of classification techniques. Informatica 31:249–268
Bailey T, Jain A (1978) A note on distance-weighted k-nearest neighbor rules. IEEE Trans Syst Man Cybern 8:311–313
Baoli L, Shiwen Y, Qin L (2003) An improved k-nearest neighbor algorithm for text categorization. ArXiv Computer Science e-prints
Wang H(2002), Nearest neighbors without k: a classification formalism based on probability. Technical Report, Faculty of Informatics, University of Ulster, N. Ireland, UK
Aggarwal CC (2014) Data classification: algorithms and applications. CRC Press, New York
Mitchell TM (1997) Machine learning. McGraw-Hill Inc, New York, p 432
Murthy (1998) Automatic Construction of Decision Trees from Data: a Multi-Disciplinary Survey. Data Min Knowl Disc 2:345–389
Utgoff P, Berkman N, Clouse J (1997) Decision tree induction based on efficient tree restructuring. Mach Learn 29:5–44
Breiman L (2001) Random forests. Mach Learn 45:5–32
Nakahara H, Jinguji A, Fujii T et al (2016) An acceleration of a random forest classification using altera SDK for OpenCL. In: International conference on field-programmable technology (FPT), pp. 289–292
Nakahara H, Jinguji A, Sato S, et al. (2017) A random forest using a multi-valued decision diagram on an FPGA. In: IEEE 47th international symposium multiple-valued logic, pp. 266–271
Lecun Y, Bottou L, Bengio Y et al (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278–2324
Decost BL et al (2019) Ultrahigh carbon steel micrographs. Springer, Berlin
Fukushima K (1980) Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol Cybern
Lecun Y, Bottou L, Bengio Y et al (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278–2324
Madani A, Moradi M, Karargyris A et al (2018) Chest X-ray generation and data augmentation for cardiovascular abnormality classification. In: Proc SPIE 10574, Medical Imaging
Gurumurthy S, Sarvadevabhatla RK, Babu RV (2017) DeLiGAN: generative adversarial networks for diverse and limited data. In: IEEE conference on computer vision and pattern recognition (CVPR), Honolulu, pp 4941–4949
Zhu X (2006) Semi-supervised learning literature survey. Computer Sciences TR 1530, University of Wisconsin, Madison
Caruana R (1995) Learning many related tasks at the same time with back propagation. MIT Press, Cambridge, pp 657–664
Bengio Y (2012) Deep learning of representations for unsupervised and transfer learning. ICML Unsupervised and Transfer Learning 27:17–36
Pan SJ, Yang Q (2010) A survey on transfer learning. IEEE Trans Knowl Data Eng 22:1345–1359
Deng J, Dong W, Socher R et al. (2009) ImageNet: a large-scale hierarchical image database. In: CVPR09
Krizhevsky A, Sutskeverand I, Hinton GE (2012), ImageNet classification with deep convolutional neural networks. In: NIPS, pp. 1106-1114
Karen S, Zisserman A (2014) Deep Convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556
He K, Zhang X, Ren S et al (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770–778
Yu Y, Lin H, Yu Q et al (2015) Modality classification for medical images using multiple deep convolutional neural networks. J Comput Inf Syst 11:5403–5413
Grande J (2012) Principals of image analysis. Metallogr Microstruct Anal 1:227–243
Rafael C, Gonzalez E, Woods R (2001) Digital image processing, vol 2. Pearson Education, London
Chanda B, Majumder DD Digital image processing and analysis, 2nd edn. PHI Learning Private Limited
Mitra SK, Sicuranza GL (2001) Nonlinear image processing. Academic Press, Cambridge
Manjón JV, Carbonell-Caballero J, Lull JJ et al (2008) MRI denoising using non-local means. Med Image Anal 12:514–523
Buades A, Coll B et al (2011) Non-local means denoising. Image Process On Line 1:208–212
Zhang W, Zhao Y, Breckon TP et al (2017) Noise robust image edge detection based upon the automatic anisotropic gaussian kernels. Pattern Recogn 63:193–205
Ma X, Liu S, Hu S et al (2017) Sar image edge detection via sparse representation. Soft Comput 22:1–9
Yedjour H, Meftah B, Lézoray O et al (2017) Edge detection based on Hodgkin–Huxley neuron model simulation. Cogn Process, pp 1–9
Verma OP, Parihar AS (2017) An optimal fuzzy system for edge detection in color images using bacterial foraging algorithm. IEEE Trans Fuzzy Syst 25:114–127
Spirintseva OV (2016) The multifractal analysis approach for photogrammetric image edge detection. Int J Image Graphics Signal Process (IJIGSP) 8:1
Mohamed S, Mahmoud T, Ibrahim M (2017) Efficient edge detection technique based on hidden Markov model using Canny operator. Threshold 6
Saglam A, Baykan NA (2017) Sequential image segmentation based on minimum spanning tree representation. Pattern Recogn Lett 87:155–162
Choy SK, Lam SY, Yu KW et al (2017) fuzzy model-based clustering and its application in image segmentation. Pattern Recogn 68:141–157
Zhao Q-h, Li X-l, Li Y et al (2017) A fuzzy clustering image segmentation algorithm based on hidden markov random field models and Voronoi tessellation. Pattern Recogn Lett 85:49–55
Pal NR, Pal SK (1993) A review on image segmentation techniques. Pattern Recogn 26:1277–1294
Kanopoulos N, Vasanthavada N, Baker RL (1988) Design of an image edge detection filter using the sobel operator. IEEE J Solid-State Circuits 23(2):358–367
Chen YY, Duval T, Hong UT et al (2007) Corrosion properties of a novel bulk Cu0.5NiAlCoCrFeSi glassy alloy in 288o C high-purity water. J Mater Lett 61:2692–2696
Yang TH, Huang RT, Wu CA et al (2009) Effect of annealing on atomic ordering of amorphous ZrTaTiNbSi alloy. J Appl Phys Lett 95:241
Tang MB, Zhao DQ, Pan MX (2004) Binary Cu-Zr bulk metallic glasses. J Chin Phys Lett 21:901–903
Reineke EG, Inal OT (1983) Crystallization behavior of amorphous Ni50Nb50 on continuous heating. J Mater Sci Eng 57:223–231
Takeuchi A, Chen N, Wada T et al (2011) Pd20Pt20Cu20Ni20P20 high-entropy alloy as a bulk metallic glass in the centimetre. J Intermet 19:1546–1554
Gao XQ, Zhao K, Ke HB et al (2011) High mixing entropy bulk metallic glasses. J Non-Cryst Solids 357:3557–3560
Lai CH, Lin SJ, Yeh JW et al (2006) Effect of substrate bias on the structure and properties of multi-element (AlCrTaTiZr)N coatings. J Phys D Appl Phys 39:4628–4633
Plummer JD, Cunliffe AJ, Figueroa AI, et al. (2011) Glass formation in a high entropy alloy. In: Presentation at the 8th international conference on bulk metallic glasses. Hong Kong
Hsieh PJ, Lo YC, Wang CT et al (2007) Cyclic transformation between nanocrystalline and amorphous phases in Zr based intermetallic alloys during ARB. Intermetallics 15:644–651
Hu CJ, Wu HM, Chen TY (2009) Synthesis of Mg-Cu-Ti based amorphous alloys by mechanical alloying technique. J Phys: Conf Ser 144:012020
Aydinbeyli N, Celik ON, Gasan H et al (2006) Effect of the heating rate on crystallization behavior of mechanically alloyed Mg50Ni50 amorphous alloy. Int J Hydrog Energy 31:2266–2273
Ma LQ, Wang LM, Zhang T et al (2002) Bulk glass formation of Ti − Zr − Hf − Cu − M (M = Fe, Co, Ni) alloys. Mater Trans 43:277–280
Yeh JW (2006) Recent progress in high-entropy alloys. Annales De Chimie-Science Des Materiaux 31:633–648
Chang HW, Huang PK, Davison A et al (2008) Nitride films deposited from an equimolar Al − Cr − Mo − Si − Ti alloy target by reactive direct current magnetron sputtering. Thin Solid Films 516:6402–6408
Cheng KH, Lai CH, Lin SJ et al (2011) Structural and mechanical properties of multi-element (AlCrMoTaTiZr)Nx coatings by reactive magnetron sputtering. Thin Solid Films 519:3185–3190
Tsai MH, Yeh JW, Gan JY (2008) Diffusion barrier properties of AlMoNbSiTaTiVZr high-entropy alloy layer between copper and silicon. Thin Solid Films 516:5527–5530
Zhang H, Pan Y, He YZ et al (2011) Microstructure and properties of 6FeNiCoSiCrAlTi high-entropy alloy coating prepared by laser cladding. Appl Surf Sci 257:2259–2263
Senkov ON, Wilks GB, Miracle DB et al (2010) Refractory high-entropy alloys. Intermetallics 18:1758–1765
Tong CJ, Chen YL, Chen SK et al (2005) Microstructure characterization of AlxCoCrCuFeNi high-entropy alloy system with multiprincipal elements. Metall Mater Trans A 36:881–893
Guo S, Ng C, Lu J et al (2011) Effect of valence electron concentration on stability of fcc or bcc phase in high entropy alloys. J Appl Phys 109:103505
Tung CC, Yeh JW, Shun TT et al (2007) On the elemental effect of AlCoCrCuFeNi high-entropy alloy system. Mater Lett 61:1–5
Ke GY, Chen SK, Hsu T et al (2006) FCC and BCC equivalents in as-cast solid solutions of AlxCoyCrzCu0.5FevNiw high-entropy alloys. Annales De Chimie-Science Des Materiaux 31:669–683
Chen HY, Tsai CW, Tung CC et al (2006) Effect of the substitution of Co by Mn in Al − Cr − Cu − Fe − Co − Ni high-entropy alloys. Annales De Chimie-Science Des Materiaux 31:685–698
Cantor B, Chang ITH, Knight P et al (2004) Microstructural development in equiatomic multicomponent alloys. Mater Sci Eng A 375–377:213–218
Yeh JW, Chang SY, Hong YD et al (2007) Anomalous decrease in X-ray diffraction intensities of Cu − Ni − Al − Co − Cr − Fe − Si alloy systems with multi-principal elements. Mater Chem Phys 103:41–46
Chiang CW (2004) Microstructure and properties of as-cast 10-component nanostructured AlCoCrCuFeMoNiTiVZr high-entropy alloy. National Tsing Hua University, Taiwan
Zhou YJ, Zhang Y, Wang YL et al (2007) Microstructure and compressive properties of multicomponent Alx(TiVCrMnFeCoNiCu)100 − x high-entropy alloys. Mater Sci Eng A 454–455:260–265
Zhang Y, Zhou YJ, Lin JP et al (2008) Solid-solution phase formation rules for multi-component alloys. Adv Eng Mater 10:534–538
Wang XF, Zhang Y, Qiao Y et al (2007) Novel microstructure and properties of multicomponent CoCrCuFeNiTix alloys. Intermetallics 15:357–362
Chen MR, Lin SJ, Yeh JW (2006) Microstructure and properties of Al0.5CoCrCuFeNiTix (x = 0−2.0) high-entropy alloys. Mater Trans 47:1395–1401
Chen MR, Lin SJ, Yeh JW (2006) Effect of vanadium addition on the microstructure, hardness, and wear resistance of Al0.5CoCrCuFeNi high-entropy alloy. Metall Mater Trans A 37:1363–1369
Yang JY, Zhou YJ, Zhang Y (2007) Solid solution formation criteria in the multi-component alloys with high entropy of mixing. Chin Mater Sci Technol Equip 5:61–63
Zhou YJ, Zhang Y, Wang YL et al (2007) Solid solution alloys of AlCoCrFeNiTix with excellent room-temperature mechanical properties. Appl Phys Lett 90:181904
Li Y, Poon SJ, Shiflet GJ (2007) Formation of bulk metallic glasses and their composites. MRS Bull 32:624–628
Acknowledgements
The author is grateful to Prof. Partha Pratim Chattopadhyay, Department of Metallurgy and Materials Engineering, Indian Institute of Engineering Science and Technology, Shibpur, and Dr. Snehanshu Pal, Assistant Professor, National Institute of Technology, Rourkela, for their valuable guidance in completing this work.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Cite this article
Choudhury, A. The Role of Machine Learning Algorithms in Materials Science: A State of Art Review on Industry 4.0. Arch Computat Methods Eng 28, 3361–3381 (2021). https://doi.org/10.1007/s11831-020-09503-4