1 Introduction

Steel is one of the most recycled materials. Its recycling, however, requires a refining process that removes impurities such as slag; additives must then be introduced to enhance the steel properties, and adding these elements demands precise temperature control, since excessively high temperatures cause losses of both heat and additives through oxidation [1]. To produce high-strength low-alloy (HSLA) steel, a complementary process called secondary metallurgy is used, in which elements and chemical additives are introduced to produce the alloy. This kind of process requires temperatures above 1600 °C, and some low alloys require more than 1700 °C, which cannot be measured by common thermocouples such as the K type. Moreover, each addition of the elements needed to refine the steel reduces the temperature of the liquid bath [2].

Several problems motivated this research. First, the temperature cannot be measured online because the furnace is sealed during operation to protect devices and personnel from the heat and the electric arc used to produce it. Second, sensors cannot operate above 1600 °C because they melt and are damaged by the electric arc, so the temperature cannot be read [3]. Third, the temperature must be taken more than once to avoid erroneous values and to validate the measurement [4]. Fourth, thermocouples have a specified measurement error of about 1% [5]; they are installed near the furnace walls rather than at the center, where the temperatures are highest, their supports dissipate heat [6], and they require a protective cap to prevent damage [5].

2 Related work

2.1 Type-2 neural networks

The type-2, or interval type-2, neural network is a relatively recent technology, being only about ten years old. The first appearance of a class of artificial neural network (ANN) called a fuzzy neural network, similar to the pure or canonical form of the radial basis function neural network, dates from 2008 in [7]. That network, however, presents some complications because of the constraints of its own artificial neurons, which can only operate on a single variable; that is, the problem must be decomposed into smaller problems whose summation finally gives the overall answer, which means that the classic fuzzy rules, or Zadeh rules (1), mentioned by [8] cannot operate with this network. Another problem with that first type-2 network is that it operates with Takagi–Sugeno–Kang (TSK) rules (2), that is, with a coefficient for every variable, equivalent to a function; the antecedents are the same as in the Zadeh rules, but a single neuron cannot process a fuzzy rule and automatically assemble a fuzzy logic system represented graphically by an ANN, as opposed to an ANN that treats the inputs with a single neuron for every variable,

$${R}_{n}: \text{IF } {x}_{1} \text{ is } {F}_{1} \text{ and } {x}_{2} \text{ is } {F}_{2} \text{ and} \dots \text{and } {x}_{n} \text{ is } {F}_{n}, \text{ then } y \text{ is } {G}^{1}$$
(1)

where \({x}_{n}\) are the input variables, \({F}_{n}\) is the fuzzy value of the \(n\)th variable, and \(y\) is the output with value \({G}^{1}\).

$${R}_{{n}_{TSK}}: \text{IF } x(k) \text{ is } {F}_{1} \text{ and } x(k-1) \text{ is } {F}_{2} \text{ and} \dots \text{and } x(k-p+1) \text{ is } {F}_{p}, \text{ then } {u}_{i}={c}_{1}^{l}x\left(k\right)+{c}_{2}^{l}x\left(k-1\right)+\dots +{c}_{p}^{l}x\left(k-p+1\right)$$
(2)

where \(x(k)\) and its time-delayed versions are the states of the dynamical system under control, and the consequent is the control \({u}_{i}\), which here is a linear combination of the \(p\) states.
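
As a small illustration of Eq. (2), a TSK consequent is simply a linear combination of the delayed states. The following snippet is a sketch introduced here only for clarity; the function name and coefficient values are hypothetical.

```python
def tsk_consequent(states, coeffs):
    """Evaluate the TSK consequent of Eq. (2): u = c1*x(k) + c2*x(k-1) + ... + cp*x(k-p+1)."""
    return sum(c * x for c, x in zip(coeffs, states))

# Hypothetical rule with p = 3 delayed states x(k), x(k-1), x(k-2)
# tsk_consequent([1.2, 0.8, 0.5], [0.4, 0.3, 0.1])  -> 0.77
```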

A recent study [9] classifies type-2 ANNs along the following dimensions: feedforward versus recurrent architectures, Mamdani versus TSK modeling of the consequent outputs, and interval versus general type-2 for the processing after the output. There is, however, no consensus on the categorization of these networks; they may all be called type-2 multilayer perceptrons, type-2 fuzzy neural networks, or IT-2 RBFNNs, because their basic architectures correspond to these classes of networks. On the other hand, the term IT-2 RBFNN was introduced in 2015 in [10], which presents simulated applications to plant classification and E. coli bacteria classification, as well as tests for the prediction of mechanical properties. Later, [11] compares data-driven fuzzy models (DDFM) and the IT-2 RBFNN in a real manufacturing process dedicated to classifying rail production, and [12] presents a self-evolving recurrent type-2 ANN that treats the weights as fuzzy numbers, i.e., as fuzzy sets, which turns the ANN into a general type-2 ANN; it is used for dynamical-system identification, with a bounded-input bounded-output nonlinear plant and a second-order nonlinear time-varying plant as test cases. In [13], a robust fuzzy controller based on a type-2 RBFNN with Takagi–Sugeno (3) consequents is applied to an electrohydraulic actuator. In summary, [11,12,13,14] present theoretical proposals to model RBFNNs. The applications of this class of ANNs are limited and present some challenges and limitations: [12] is restricted to a single type of membership function (MF), although it is recurrent so that the system can adapt; [13] presents a network that uses the Mamdani model and is later transformed into the Takagi–Sugeno–Kang (TSK) model, because, as mentioned in previous paragraphs, fuzzy rules cannot be applied directly to an ANN and the only way to apply the antecedents of a fuzzy rule is to treat the inputs variable by variable; and [14] uses beta basis functions to delimit the spread of the uncertain inputs, applying the IT-2 RBFNN with TSK consequents to predict time series and to manage uncertainties in the following examples: noise-free Mackey–Glass chaotic time series, noisy chaotic time-series prediction, and ECG heart-rate time-series monitoring. In [15], the IT-2 RBFNN is used for quality control via image processing; these type-2 models are used to avoid the problems generated during data acquisition, e.g., in the acquisition of images, where, as mentioned in [16], several problems arise, such as the shape of the lens, the position of the camera, exposure, acquisition, and transmission, among others.

$${R}_{n}: \text{IF } {x}_{1} \text{ is } {F}_{1} \text{ and } {x}_{2} \text{ is } {F}_{2} \text{ and} \dots \text{and } {x}_{n} \text{ is } {F}_{n}, \text{ then } y=f(x)$$
(3)

A modified neuron is incorporated in the IT-2 RBFNN to eliminate type reduction in [17]. In [18], the IT-2 RBFNN is used to classify and recognize alphabets for word dictation and to recognize the brain's response to a visual stimulus representing a vowel, at two levels: intrapersonal, for a single subject, and interpersonal, for the group; in a second phase, the IT-2 RBFNN is used to process the noise caused by variations in the sounds and images. In [19], a type-2 ANN model for time-series prediction and uncertainty management is developed using an evolving recurrent interval type-2 intuitionistic fuzzy neural network (eRIT2IFNN). In [20], an SVD dynamic fractional-order deep-learned type-2 fuzzy logic system (FDT2-FLS) is used to predict the solution of a hyperchaotic system, with adaptation rules for the consequent parameters derived so that global Mittag–Leffler stability is achieved; the model is tested on online prediction of chaotic time series and online prediction of glucose levels in type-1 diabetes patients, with only one epoch of training. In [21], a multilayer interval type-2 fuzzy extreme learning machine (ML-IT2-FELM) is presented for the recognition of walking activities and gait events; this kind of network is equivalent to an IT-2 fuzzy logic system without defuzzification. It considers three kinds of walking, namely level-ground walking, ramp ascent, and ramp descent, and the IT-2 RBFNN recognizes and classifies the type of walking with precision above 99.5%, although it requires learning. An enhanced version of the model of [21] is presented in [22] with a general type-2 RBFNN that provides an additional phase to process the uncertainty; it is a theoretical proposal tested on three cases (benchmark data sets for multiclass classification and regression, nonlinear plant identification, and noisy chaotic time-series prediction).

In [23], classification is performed by a type-2 fuzzy c-means (FCM) classifier embedded in the hidden (middle) layer of the ANN; this study is a theoretical proposition whose only purpose is to compare the performance of the proposed method against type-1 FCM. In [24], the IT-2 RBFNN is used for forecasting, the novelty being the use of ellipsoidal MFs (Fig. 1a) instead of the classic Gaussian MF (Fig. 1b). Ellipsoidal MFs provide larger intervals because of their shape, but the non-uncertain (certain) region at the center of the interval is reduced, since the interval is an ellipsoid, and the uncertain region grows, as shown in Fig. 1c, which compares the certain regions of the Gaussian and ellipsoidal MFs. Figure 2 presents the forms of the intervals: the "S" shape of the classic Gaussian interval (Fig. 2a), the "ovoid" of the ellipsoidal interval (Fig. 2b), and the trapezoid of the triangular functions (Fig. 2c). The use of non-symmetrical functions in the interval does not adhere to the canonical form created by two type-1 fuzzy sets, which need to be symmetrical; the sets needed to assemble an interval type-2 MF are shown in Fig. 3: low or left (Fig. 3a), right or high (Fig. 3b), and the ellipsoidal interval type-2 (Fig. 3c). The justification is the presence of noise generated by an external disturbance, but this approach can only be used with interval type-2 systems with uncertain standard deviation and not with uncertain means, which is another limitation or challenge for the modeling of the system. In [25], a learning method based on backpropagation is proposed to adjust the weights of the neurons and thereby reduce the computational time required for evaluation and forecasting. In [26], the IT-2 RBFNN is applied to control the uncertain dynamic trajectory of a directional drilling process under dynamic environmental change. In [27], a type-2 network is proposed to generate a generalized predictive control for a catalytic reaction, controlling the ammonia (NH3) used in the decomposition of nitrogen oxides (NOx) under time delay and uncertainties; predictive control is proposed because classical methods such as proportional-integral-derivative (PID) control cannot perform this task. A sliding mode control is presented in [28] using an IT-2 RBFNN, with the advantage of adaptation by learning through a backstepping method that generates its own information and adjusts it to attain the desired performance.

Fig. 1
figure 1

Comparison between Gaussian and ellipsoidal membership functions and their respective certain and non-certain regions. a Gaussian MF, b ellipsoidal MF, and c comparison of certain regions

Fig. 2
figure 2

Comparison between the form of intervals by different MFs. a Gaussian, b ellipsoidal, and c triangular

Fig. 3
figure 3

Membership functions used to assemble an interval type-2 set. a Low or left, b high or right, and c ellipsoidal interval type-2

2.2 Secondary metallurgy

Birat [29] defines secondary metallurgy as the process of cleaning the steel. Secondary metallurgy is characterized by the use of ladle furnaces to refine the steel; in them, several processes are carried out, such as desulfurization, deoxidation, inclusion control, and, of course, alloying, as mentioned in [30]. In these production steps, temperature control is one of the most important tasks for producing ultra-low-carbon steel, HSLA steel, and others, because the additives that prevent oxidation can themselves be lost through oxidation caused by high temperatures, as mentioned in [31], and specific temperatures are needed to add each chemical additive. Furthermore, every addition of elements causes a loss of heat that depends on the quantity of the element added, and this condition raises the production cost [2].

2.3 Secondary metallurgy and artificial intelligence

Research on temperature control in secondary metallurgy is limited: a Google Scholar search performed in July 2024, without further restrictions, returns 147 papers covering this topic in the period from 2020 to the present. If the search is narrowed by adding the term "neural," the number of papers drops considerably, to 60 in the same period.

Among the relevant and recent papers that combine secondary metallurgy with artificial intelligence techniques, the following stand out:

Reference [32] presents a brief review of models for quality control in ladle furnace processes, but practically all of them are used to verify and predict the sulfur content of the steel using black-box and mixed models, without further explanation of the type of black-box method or whether it is used to predict temperature or another control-process parameter. Reference [33] presents an application for temperature prediction based on historical data, in which a genetic algorithm optimizes a backpropagation neural network used to predict the temperature of the molten steel with an error of ± 5 °C.

In [34], principal component analysis (PCA) and a deep neural network (DNN) are used to eliminate collinearity, reduce the dimension, and reduce the complexity of the model by reducing its variables; the data obtained from PCA are used to feed the DNN. When the required precision is relaxed, the reported rates are 54%, 93%, and 98.8% for tolerances of ± 1%, ± 2%, and ± 5% in the steel; in the case of manganese, the rates for ± 1%, ± 2%, and ± 3% are 77.0%, 96.3%, and 99.5%, respectively, comparing the multiple linear regression, modified backpropagation, and DNN models. Ensemble learning (the use of mixed regression and classification models) is used as a predictor of the molten-steel temperature in [35], with the classification performed by k-nearest neighbors. The work in [36] presents a neural-network temperature forecaster for energy saving, based on monitoring the temperature (heat loss) of the process to avoid using extra energy in heating and cooling cycles and to reduce energy waste, particularly in the period between non-casting and casting times; the proposal reports a mean absolute error of 8.53 °C for a linear regression and 4.97 °C for the ANN forecaster. In [37], an estimation model for the steel temperature is designed using two approaches whose performances are compared: the first is based on physical principles, using a series of equations in a gray-box model, and the second uses an ANN to predict the temperature of the steel. The main problem of this application is the lack of reliability of the ANN when the input data differ from the calibration of the designed tool, whereas the physics-based approach behaves stably; the mean absolute error (MAE) of the ANN is around 6 °C, while the physics-based application produces an MAE of 14 °C. The application of [38] uses heuristics to generate an approximation of the liquid-steel temperature; as the authors mention, "it can predict the end temperature of LF molten steel relatively accurately. The prediction accuracy of the end temperature of molten steel at ± 10 °C can reach more than 80%." Their MAE is therefore on the order of ± 10 °C.

A two-layer transfer-learning framework based on a temporal convolutional network (TL-TCN) to measure billet temperature, together with generative adversarial networks (GAN) to predict the temperature in every zone of the ladle, is proposed in [39], while [40] proposes an optimized kernel extreme learning machine (OKELM). First, the kernel extreme learning machine is used to establish the relationship between the furnace temperature and its related factors; then the continuous human learning optimization (CHLO) algorithm is adopted to optimize the kernel parameter and the regularization coefficient, and the OKELM with optimal parameters is adopted to predict the furnace temperature more precisely and effectively. In [41], a DNN with hybrid modeling is used, employing isolation forest (IF) and zero-phase component analysis whitening (ZCA whitening) to predict the temperature of the molten steel; however, multiple techniques are required to obtain the principal variables of the process (Pearson correlation coefficient, ZCA whitening, IF, and t-distributed stochastic neighbor embedding (t-SNE)), which complicates the design of the prediction model before the ANN itself is modeled. In the experiments, precision levels of 3, 5, and 10 °C are used, obtaining 77.9%, 92.3%, and 99.6% precision, respectively; Table 1 shows the results obtained by [41].

Table 1 Comparison of models using MAE as precision metric. Adapted from [41]

A couple of conditions in the state-of-the-art literature are remarkable. First, the ANNs presented for the specific process of secondary metallurgy are type-1 networks; paraphrasing Jerry M. Mendel [8], this class of systems cannot handle the uncertainties present in the productive system, although it can manage the uncertainties in measurements by learning. That is, the design phase is critical to obtaining a reliable system after training, and the systems reported in the literature require additional complexities, such as obtaining a function that represents the system, which in turn demands an adequate method and an adequate selection of variables to build a precise system; all of these processes require training. Second, the systems presented using IT-2 RBFNNs are TS [13] or TSK [7, 9, 14], and the survey of ANNs in [9] considers only TS or TSK models, which indicates that practically all work in this field is dedicated to TS or TSK models, with the exceptions of [15] and the present proposal, which uses a Mamdani model with an IT-2 RBFNN.

3 Materials and methods

3.1 Interval type-2 radial basis function neural network (IT-2 RBFNN)

Interval type-2 systems such as the IT-2 RBFNN (Fig. 4) are an option for managing the uncertainties that measurements introduce into a process [42]. Their output is an interval of values that contains the uncertainty; this is called the secondary membership. To obtain a single value, the low and high values of the interval are averaged to produce the final output, a step called type reduction.

Fig. 4
figure 4

IT-2 topology adapted from [15]

The algorithm of the IT-2 RBFNN can take as its basis, with the necessary adaptations, the type-2 fuzzy rule base used in fuzzy logic systems, in the form (4), which can be transformed into (5); the MF value (6) then provides a weight for the next step. Equations (7) and (8) produce the low and high membership values of the interval for every variable; these are used in (9) for the low output and in (10) for the high output, and the final approximation is obtained with (11), which can be reinterpreted as the center-of-gravity defuzzifier of fuzzy models (12).

$$\text{Rule } i: \text{IF } {x}_{1} \text{ is } {\widetilde{F}}_{1}^{i} \text{ and } {x}_{2} \text{ is } {\widetilde{F}}_{2}^{i}, \text{ then } {y}_{i}={\widetilde{G}}^{l}$$
(4)

where \({\widetilde{F}}_{1}^{i}=\left\{{m}_{1},{m}_{2}\right\}\) contains the low and high values for the variable \({x}_{1}\) (and likewise for \({\widetilde{F}}_{2}^{i}\)), and \({\widetilde{G}}^{l}={y}_{i}=\{{\overline{y}}_{i},{\underline{y}}_{i}\}\) represents the output of the combination of variables.

$$\text{IF } {X}_{1} \text{ is } (\underline{a}, \overline{a}) \text{ and } {X}_{2} \text{ is } (\underline{b},\overline{b}), \text{ THEN } ({\underline{Y}}_{1},{\overline{Y}}_{1})$$
(5)
$${\mu }_{a}\left({x}_{i}\right)=\exp\left[-\frac{1}{2}{\left[\frac{{x}_{i}-{m}_{i}}{{\sigma }_{{x}_{i}}}\right]}^{2}\right]=R\left({x}_{i}\right)=w({x}_{i})$$
(6)
$${\underline{\mu }}_{a}\left({x}_{i}\right)=\exp\left[-\frac{1}{2}{\left[\frac{{x}_{i}-{\underline{m}}_{i}}{{\underline{\sigma }}_{{x}_{i}}}\right]}^{2}\right]=\underline{R}\left({x}_{i}\right)=\underline{w}({x}_{i})$$
(7)
$${\overline{\mu }}_{a}\left({x}_{i}\right)=\exp\left[-\frac{1}{2}{\left[\frac{{x}_{i}-{\overline{m}}_{i}}{{\overline{\sigma }}_{{x}_{i}}}\right]}^{2}\right]=\overline{R}\left({x}_{i}\right)=\overline{w}({x}_{i})$$
(8)

where \({\mu }_{a}({x}_{i})\) is the membership value of the variable \({x}_{i}\), \({m}_{i}\) is the mean of the fuzzy set, and \({\sigma }_{{x}_{i}}\) is the dispersion (standard deviation) of the fuzzy set; the underlined and overlined symbols denote the lower and upper values of these parameters, which define the interval.

$$\underline{d\left(x\right)}=\sum_{i=1}^{H}\underline{{c}_{i}{w}_{i}}\quad \forall c,R\in x$$
(9)
$$\overline{d\left(x\right)}=\sum_{i=1}^{H}\overline{{c}_{i}{w}_{i}}\quad \forall c,R\in x$$
(10)
$$d\left(x\right)=\frac{\underline{d\left(x\right)}+\overline{d\left(x\right)}}{2}$$
(11)

where \({c}_{i}\) is the output value taken from the dataset for rule \(i\), \({w}_{i}\) is its firing weight, and \(H\) is the number of rules.

$${y}_{c}\left(x\right)=\frac{\sum_{1}^{n}{y}_{i}{\mu }_{B}({y}_{i})}{\sum_{1}^{n}{\mu }_{B}({y}_{i})}$$
(12)
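
To make these inference steps concrete, the following minimal sketch (written in Python with NumPy) evaluates Eqs. (7)–(11) for a crisp input vector: the lower and upper Gaussian memberships are combined across the input variables with the product t-norm, weighted by the rule consequents, and the resulting interval is averaged for type reduction. It is an illustration under assumptions, not the implementation used in this work; in particular, the function and variable names and the treatment of the consequents as crisp values taken from the dataset are assumptions introduced here.

```python
import numpy as np

def it2_rbfnn_forecast(x, m_lo, m_hi, s_lo, s_hi, c, normalize=False):
    """Sketch of the IT-2 RBFNN inference of Eqs. (7)-(11).

    x          : crisp input vector, shape (n_vars,)
    m_lo, m_hi : lower/upper means of the rule antecedents, shape (n_rules, n_vars)
    s_lo, s_hi : lower/upper spreads of the rule antecedents, shape (n_rules, n_vars)
    c          : rule consequents (output values taken from the dataset), shape (n_rules,)
    """
    # Lower and upper Gaussian membership values, Eqs. (7) and (8)
    mu_lo = np.exp(-0.5 * ((x - m_lo) / s_lo) ** 2)
    mu_hi = np.exp(-0.5 * ((x - m_hi) / s_hi) ** 2)

    # Firing weight of every rule: product t-norm across the input variables
    w_lo = np.prod(mu_lo, axis=1)
    w_hi = np.prod(mu_hi, axis=1)

    # Lower and upper outputs, Eqs. (9) and (10)
    d_lo = np.sum(c * w_lo)
    d_hi = np.sum(c * w_hi)
    if normalize:
        # Normalized (center-of-gravity) reinterpretation, Eq. (12)
        d_lo, d_hi = d_lo / np.sum(w_lo), d_hi / np.sum(w_hi)

    # Type reduction: average of the interval bounds, Eq. (11)
    return 0.5 * (d_lo + d_hi)
```

For the proposal of Sect. 4, a seven-rule base would supply the means, spreads, and consequents used by such a sketch.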

4 Proposal

This method uses the individual base inference [43], which means that every reading of the process can be used as a rule. This is an alternative to clustering or to the composed base inference that is the classic approach for intelligent systems. In this case, seven rules are selected, as shown in Table 2. An interval type-2 model is defined by a type-2 fuzzy set given by (13) and depicted in Fig. 5.

$$\widetilde{A}=\left\{\left(x,u\right),\left({\mu }_{\widetilde{A}} \left(x,u\right)\right)|\forall x\in X,\forall u\in {J}_{x}\subseteq \left[0,1\right]\right\}$$
(13)

where \(\widetilde{A}\) contains a type-2 membership function \({\mu }_{\widetilde{A}} \left(x,u\right)\), with \(x\in X\), \(u\in {J}_{x}\subseteq \left[0,1\right]\), and \({0\le \mu }_{\widetilde{A}} \left(x,u\right)\le 1\).

Table 2 Rule base for the IT-2 RBFNN
Fig. 5
figure 5

Low and high membership values of interval type-2 fuzzy set

In Fig. 6, it can be seen that \({\overline{\mu }}_{A}({x}_{q})\) is equal to \(\overline{{f}_{r}^{l}}({x}_{q})\) and that \({\underline{\mu }}_{A}\left({x}_{q}\right)\) is equivalent to \(\underline{{f}_{l}^{l}}({x}_{q})\).

Fig. 6
figure 6

Vertical slice, secondary membership, or interval of evaluation of interval type-2 fuzzy logic system

This proposal presents an enhanced and optimized system compared with the composed-base-inference model of [31]: while [31] is assembled with 49 rules, this proposal has only seven. The reduction of 86% in the size of the rule base lowers the complexity of design and programming and the computational time, and therefore the different costs of the system.

In this case, the inference is produced by (9) and (10) for the low and high values of the interval and by (11) to produce the overall forecast or prediction. The rule base is generated from the seven principal points of the Gaussian distribution, the sigma points and the mean point, for both the antecedents and the consequents.
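
A minimal sketch of how such a seven-rule base could be assembled from the process data is given below. The use of the mean and the ±1σ, ±2σ, and ±3σ points as the seven principal points, the uncertainty margin applied to the means, and all names are assumptions introduced here for illustration only; they do not reproduce the exact rule base of Table 2.

```python
import numpy as np

def build_rule_base(inputs, outputs, spread_factor=0.1):
    """Sketch: derive seven rule centers (mean and +/- 1, 2, 3 sigma points)
    from the data for the antecedents and consequents.

    inputs : array of shape (n_samples, n_vars), e.g. [time, kWh]
    outputs: array of shape (n_samples,), e.g. temperature in degrees C
    """
    k = np.array([-3, -2, -1, 0, 1, 2, 3])        # seven principal points

    mu_x, sd_x = inputs.mean(axis=0), inputs.std(axis=0)
    mu_y, sd_y = outputs.mean(), outputs.std()

    centers = mu_x + np.outer(k, sd_x)             # antecedent centers, shape (7, n_vars)
    consequents = mu_y + k * sd_y                  # consequent of each rule, shape (7,)

    # Assumed uncertainty margin on the means to form the interval type-2 antecedents
    m_lo = centers - spread_factor * sd_x
    m_hi = centers + spread_factor * sd_x
    sigma = np.tile(sd_x, (7, 1))                  # assumed common spread per variable

    return m_lo, m_hi, sigma, consequents
```

With these seven rules, a forecast for a given (time, kWh) pair would then be produced by the inference sketch of Sect. 3.1, without any training phase.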

To evaluate the proposal, 18 data pairs were obtained from the process using two input variables, the process time and the kilowatt-hours used to raise the temperature, together with the temperature in °C used to evaluate the performance of the proposal. The data are presented in Table 3.

Table 3 Data pairs for test

5 Results and discussion

The forecast of temperature obtained by the monitoring and control system for secondary metallurgy in ladle furnaces produces an average error below the common error of k-type thermocouples (Fig. 7). The shaded region shows the error zone of a k-type thermocouple (± 6 °C), and the prediction of the IT-2 RBFNN falls within this zone (Fig. 7). Moreover, as shown in Fig. 8, all the forecasters used fall within this zone, but the type-1 (T1) singleton fuzzy logic system (SFLS) and the interval type-2 (IT2) radial basis function neural network (RBFNN) present better precision than the T1 RBFNN and the IT2 SFLS, although the T1 SFLS is not better than the IT-2 RBFNN.

Fig. 7
figure 7

Results of temperature forecasting of interval type-2 models

Fig. 8
figure 8

Error in °C produced by different forecasters on T1 FLS, IT2 FLS, T1 RBFNN, and IT2 RBFNN

In Fig. 7, the IT-2 RBFNN produces better results than the IT-2 FLS. The mean absolute error is 2.02 °C for the IT-2 RBFNN and 6.48 °C for the IT-2 FLS, which represents an error of 0.126% for the IT-2 RBFNN and 0.403% for the IT-2 FLS; the k-type thermocouples currently used present a mean error of 0.4%, equivalent to that of the IT-2 FLS prediction, as mentioned in [44]. In both cases (IT-2 RBFNN and IT-2 FLS), the error is less than or equal to (IT-2 FLS) the error produced by a k-type thermocouple, without the need for specialized instruments; all that is needed is the process time and the kilowatt-hours used to raise the temperature of the ladle furnace. The error grows when the furnace is off, because the model depends on time: while the furnace is in operation the temperature increases with time, but when it is off this relationship no longer holds, which produces a contradiction in the behavior of the process with the limited information available. Even so, an error at this level, 10 °C or less, does not affect the refining process, because it represents about 0.6%, which is close to the error of specialized instruments.
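
As a quick arithmetic check of these percentages (assuming a nominal bath temperature of about 1600 °C, which is consistent with the values reported above):

$$\frac{2.02}{1600}\times 100\approx 0.126\,\%,\qquad \frac{6.48}{1600}\times 100\approx 0.40\,\%,\qquad \frac{10}{1600}\times 100\approx 0.6\,\%$$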

It is remarkable that both interval systems, used without training, produce results better than or equal to those of a type-1 system with training (Fig. 8); Fig. 9 is a different representation of Fig. 8 that better illustrates the mean absolute error produced by each system, which cannot be appreciated in Fig. 8. This behavior may be explained by the fact that the inferences are produced by the product of the membership values of the variables, using the product t-norm. The fact that the proposal does not require a training phase yields an accurate system capable of being used online.

Fig. 9
figure 9

Error in °C produced by different forecasters on T1 SFLS, IT2 FLS, and IT2 RBFNN

Table 4 and Fig. 6 present the results obtained with the different forecasters using the mean absolute error (MAE) given by (14) or (15); they show a reduction of the error when the RBFNNs are used, whether type-1 or interval type-2. In contrast, the interval type-2 fuzzy logic system presents an error on the order of 300% of that of both RBFNNs for this application, owing to the management of uncertainties and the fact that this process presents cycles of linear and nonlinear behavior; as shown in [31], that model does not adapt in this case, as can be seen in Fig. 5. The error rates are shown in Table 4, and it is remarkable that the T1 SFLS and the IT2 RBFNN produce very similar error values.

$$MAE=\frac{\sum_{i=1}^{n}\left|{\widehat{y}}_{i}-{y}_{i}\right|}{n}$$
(14)

where \({\widehat{y}}_{i}\) is the desired output and \({y}_{i}\) is the prediction or forecast.

$$MAE=\frac{\sum_{i=1}^{n}\left|{e}_{i}\right|}{n}$$
(15)

where \({e}_{i}\) is the absolute error of prediction or forecast.
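
As a brief illustration, the MAE of Eqs. (14) and (15) can be computed as in the following sketch; the function name and the example values are placeholders introduced here.

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    """MAE as in Eqs. (14)-(15): mean of the absolute forecast errors."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

# Hypothetical example: measured vs. forecast temperatures in degrees C
# mean_absolute_error([1612.0, 1605.0], [1610.5, 1607.0])  -> 1.75
```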

Table 4 MAE of different forecasters

Table 5 presents the MAE obtained by the different approaches reported in the literature and by type-2 ANNs; the error ranges between 0.4 and 0.9%, that is, between 5 and 14 °C for a base value of 1600 °C in the production process. The first column gives the referenced work, the second column the error in °C when available, and the third column the error in percentage when available.

Table 5 MAE produced in approaches of the state of art

6 Conclusions

The management of uncertainty by artificial neural networks (ANNs) in the classic type-1 form yields precision rates near 80%, as shown in the literature and documented by [45]. This means that an ANN with a precision of 99.87% is an excellent classifier, as is the case for both RBFNNs used here. The T1 SFLS, by its nature, does not manage the uncertainties, a fact that is crucial for this application as a control system acting as a classifier, since the application demands precision levels near 100%. The T1 SFLS presents similar behavior, but the IT-2 RBFNN improves on the results of its T1 SFLS counterpart by 0.000521%, which represents an enhancement in the results.

The devices commonly used in these processes and the limitations of the process itself mean that the temperature cannot be monitored at all times. This is why an intelligent expert system is needed that works as a forecaster of the temperature inside the ladle furnace, so that the chemical elements can be added in the right way, at the right time, and in the right amount, avoiding their loss through oxidation and thus achieving high-strength low-alloy steels.

The system created, with its small rule base, can work online and keeps operators safe, since handling high temperatures and an electric arc is very risky for the personnel who work around the furnace. On the other hand, since sensors and devices are limited, this system can operate at very high temperatures that commercially available devices are not able to handle adequately.