Introduction

There is a recent trend, moving away from the search for specific, single-analyte sensors, towards the use of sensor systems. This is because few sensors are almost perfect, that is, free from interferences, calibration drift, noise effects or irreproducibility. A first strategy was the use of automated analytical systems to compensate for or solve some of these non-idealities. The stages introduced, such as sample conditioning, frequent recalibration, sample clean-up or preconcentration, enhanced analytical performance but at higher cost and complexity. A more novel scheme is the use of multielement sensor arrays, which can use sensing devices with less restrictive characteristics. With this approach, problems that are hard to tackle with more classical alternatives, such as the classification of foods and beverages, can be attempted.

The use of a set of sensors instead of a single, perfectly selective sensor brings added advantages. Instead of obtaining only a single data point, sensor arrays provide multiple data points per sample, following the trend of obtaining information with higher dimensionality. This richer content can provide additional chemical information, which in turn is used to differentiate multiple analytes and to discriminate interfering species. Information content can sometimes be further enhanced using modulation schemes, obtaining the so-called higher order sensor systems.

The transducing approaches for sensor arrays cover almost all the different sensing principles; among others, arrays of mass sensors, chemoresistors, optodes, and voltammetric or potentiometric sensors have been described. When the array is used to detect gaseous species, it is given the name "electronic nose" [1]; when it is used for liquid samples, the term "electronic tongue" is beginning to be used [2]. Both terms have been proposed in reference to the animal senses, because these strategies share the same general principle: only a few classes of differentiated receptors are used but, through cross-term responses and overlapping selectivities, a multitude of species can be discriminated [3].

These higher-dimensional, overlapping measurements have to be processed with adequate tools in order to extract the required information. The advantages of a simpler sensing system, almost free of accessory equipment or maintenance, counterbalance the more complex mathematical modelling of the responses and the increased processing power required to run recognition algorithms on suitable platforms. In fact, the combination of the multiple responses of a sensor array with advanced signal-processing techniques has been identified as one of the better ways of improving sensor performance, better than miniaturization of hardware [4]. This coupling can provide, together with the sought information (i.e. the analyte concentration), accessory information (presence of interferences) or diagnostic data such as sensor malfunction or process disruption.

Considering potentiometric sensors, the aforementioned approach has been historically addressed by application of different chemometric tools. The first attempt was provided by Otto and Thomas [5], who employed an eight-sensor array and their Nicolsky–Eisenmann response model fitted by multiple linear regression. Beebe and Kowalski repeated this approach, in this case employing a five-sensor array with non-linear regression methods (Simplex) [6] or projection pursuit regression [7], a multivariate chemometrics technique. A further approach was that of Forster et al. who employed a four-electrode array and again Simplex non-linear regression [8].

A convenient approach is the use of artificial neural networks (ANNs), as demonstrated by Bos and Van der Linden in their seminal work [9], where they were the first to model ion-selective electrodes (ISEs) with ANNs. These algorithms, which generate "black-box" models, have shown a special ability to describe the non-linear responses obtained with sensors of different families. These tools build their models from a large amount of starting information, the training set, which must be carefully obtained [10]. This extra information is required because no thermodynamic or physical model, in this case the Nicolsky–Eisenmann equation, is assumed.

In addition, two further considerations are mandatory in order to obtain an appropriate model. These are the network topology, the parameters defining its structure, and the training strategy, the way the network learns from the initial information. The correct topology defines an optimum number of hidden layers, the number of neurons in each layer, and the transfer functions. Usually, networks with three layers of neurons are used: an input layer, a single hidden layer, and the output layer. The number of neurons in the input layer equals the number of sensors in the array, and the number of neurons in the output layer equals the number of information channels needed, normally the number of chemical species determined. To optimise the network, the factor to be adjusted is then the number of neurons in the intermediate layer, normally selected by trial and error.

Together with the topology of the ANN, another key point is the selection of a training strategy that yields convergence with proper modelling ability and the demanded level of accuracy and precision for a given data set. In fact, at least two independent data sets are used, a training set and an (external) test set [11]. Each set contains two kinds of interrelated information: the responses of the sensor array (patterns) and the sought information (targets), in our case the concentration values of the analytes. The training set must be large enough and contain sufficient variability to yield a proper modelling of the response. In order to ensure the quality of the final results, some external check is used; for this purpose, the second set of data, the test set, is needed. This external check consists of verifying how well the model reproduces the values of the test set. Occasionally, some ANN configurations present overfitting, which is detected as a high degree of fit for the training set while the test set shows excessive deviation from the expected values [12, 13]. This condition can also be checked along the training process using a third data set, the validation set, which is used to end the learning process. In the present work one of our goals has been to explore ANN configurations with adequate topologies and training strategies for correct modelling of the responses of the sensor array with minimum overfitting.
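As a simple illustration of this external check (not part of the original procedure; variable names are hypothetical), overfitting can be flagged in Matlab by comparing the modelling error of the training and test sets:

  % Ttrain/Ttest: expected concentrations; Ytrain/Ytest: the corresponding ANN outputs
  sseTrain = sum(sum((Ttrain - Ytrain).^2));
  sseTest  = sum(sum((Ttest  - Ytest ).^2));
  if sseTest > 10*sseTrain        % illustrative threshold only
      disp('Possible overfitting: poor generalization to the external test set')
  end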

The coupling of ISE arrays with ANNs is a relatively new trend with few antecedents, especially when restricted to the quantification of species. Apart from the first proposal [9], only a few groups, such as that of Vlasov [14] or the collaboration between Massart and Fabry [15], have made significant contributions to this emerging topic. All these contributions employ feedforward networks trained with the back-propagation algorithm.

Sensor arrays have shown themselves to be a flexible design approach for the development of measuring systems that can be adapted to different applications. Instead of pursuing perfectly ideal, specific sensors or costly sensor systems, the alternative can be to use sets of several simpler, less ideal sensors, supporting them with artificial-intelligence tools [16]. Among the cases described are special applications with arrays of potentiometric sensors and chemometric tools other than ANNs. One example is the work of Toko, with an electronic tongue employing PCA, adapted to the qualification and classification of different beverages, for example beer, or to the determination of water quality [17]. There are also the contributions of Vlasov, such as the monitoring of a bottled mineral water plant [18] or the classification of clinical and food samples [19]. Few applications have shown the quantification ability of the proposed approach; some of these are the simultaneous determination of species in mineral water or wine [20], and in soft drinks and beers [21].

In this contribution, an array of potentiometric sensors, or electronic tongue, is devised for the determination of ammonium and potassium ions by direct measurement, with minimal sample pretreatment. In a classical approach this determination can be regarded as impractical, because of the known interference of common alkali ions with the response of ammonium sensors. With the electronic tongue approach, an array of eight potentiometric sensors, including an ammonium sensor, different elements sensitive to alkali ions, and non-selective or generic elements, is trained using ANNs for the determination of ammonium and potassium in water. Different mathematical pretreatment strategies and neural network procedures have been tested, and the best-performing algorithm has been proposed for the final model.

Experimental

Chemicals

The ion-selective poly(vinyl chloride) (PVC) membranes were prepared with high-molecular-weight PVC (Fluka). The plasticizers used were bis(1-butylpentyl) adipate (BPA), dioctyl sebacate (DOS), and dibutyl sebacate (DBS), all obtained from Fluka. The ionophores employed in the formulation of the potentiometric membranes were from the same family of ionophoric antibiotics: nonactin (ammonium ionophore I, from Fluka), valinomycin (potassium ionophore III, from Fluka), monensin (sodium salt, from Acros), and lasalocid (Fluka). A classical crown ether, dibenzo[18]crown-6, was also used as a neutral carrier.

Materials used for the preparation of the inner solid contact were the epoxy resin components Araldite M and HR hardener (both from Ciba-Geigy) and graphite powder (100 µm, BDH) as the conducting filler.

Imidazole (Fluka) and the salts NH4Cl and KCl (both reagent grade, from Merck) were used as the background electrolyte and the calibration species, respectively. All solutions were prepared with deionized, highly purified water (16–18 MΩ cm resistivity, Milli-Q, Millipore).

Apparatus

Potentiometric measurements were performed with a laboratory-constructed data-acquisition system. It consisted of eight input channels implemented with voltage-follower circuits based on operational amplifiers (TL071, Texas Instruments), which adapt the impedance of each sensor. Measurements were unipolar, with the reference electrode connected to ground, and each channel was shielded against noise by its signal guard. The output of each amplifier was filtered with a passive low-pass filter (2 Hz cut-off frequency) and connected to an A/D conversion card (Advantech PC-Lab 813, Taiwan) installed in a 486 personal computer running at 66 MHz. Readings were taken with custom-designed software written in QuickBASIC 4.5 (Microsoft).

Sensor array

The sensors used in our system were PVC-membrane all-solid-state ion-selective electrodes (ISEs), which employ an internal ohmic contact made from a conductive composite. Figure 1 schematically depicts the construction procedure for one of these sensors, which follows the configuration habitually used in our laboratories [22, 23]. Each sensor was formed by filling a plastic cylinder (8 mm i.d.) fitted with an electrical contact with a homogeneous mixture of 35.7% Araldite M, 14.3% HR hardener, and 50% graphite powder, which was then cured for 6 h at 50 °C. Next, a 0.5 mm-deep cavity was formed on the top of the constructed body and finally filled with the PVC potentiometric membrane cocktail and left to dry. As outlined in Fig. 1, the membranes were formed by solvent casting of the cocktail, diluted with tetrahydrofuran (1 mL per 20 mg of PVC). Once formed, the membranes were conditioned in a 0.1 mol L−1 solution of their primary ion for 24 h. Table 1 summarises the formulation of the different membranes, using the ionophoric antibiotics or the lipophilic crown ether [24, 25, 26, 27, 28, 29] as the electroactive components. The sensor array comprised duplicated sensors for ammonium, potassium, and sodium, plus two generic membrane formulations for alkali ions, one employing the crown ether and the other the antibiotic lasalocid.

Fig. 1. Schematic diagram of the fabrication process of the PVC-membrane all-solid-state potentiometric sensors, based on the graphite-epoxy composite

Table 1. Formulation of the ion-selective membranes employed in the construction of the potentiometric sensor array

No specific studies of the limits of detection (LOD) of the sensor array were performed, because of the complexity of the case, in which different ions can be present simultaneously. From individual calibrations and from inspection of raw measurements in mixed solutions, values estimated according to the IUPAC recommendation [30] were 0.002 and 0.001 mmol L−1 for the NH4+ and K+ ions, respectively.

Calibration and measurement procedure

Measurements were made in solutions with a defined background and pH (0.010 mol L−1 imidazole buffer, pH 6.60), chosen to improve the response characteristics for low-level detection of the ammonium ion. Preliminary studies with this sensor array and the ions considered, performed with 0.01 mol L−1 imidazole, 0.01 mol L−1 TRIS, and plain water as background media, showed the first to be the best choice, especially because of the slightly improved response to ammonium at low concentration levels. A double-junction Ag/AgCl electrode (Orion 90-02-00), with the imidazole background solution in its outer chamber, was used as the reference electrode.

In order to generate the primary information for the modelling of the system, different mixtures were prepared sequentially by accumulated additions of standard solutions of increasing concentration of the ions considered. These microvolume additions were performed with variable-volume micropipettes (Finnpipette, Labsystems). Each of these points, with defined concentrations of ammonium and potassium, was acquired and entered into the neural network. The standard solutions employed were NH4Cl and KCl, alone or combined, with concentrations, of one ion or both, of 10−4, 10−3, 10−2, 10−1, and 1 mol L−1. The most concentrated solution was prepared by directly weighing the salt, and the rest were prepared by sequential dilution. The total number of points generated as primary information was 174, obtained over two working days in the laboratory, with concentrations of the two ions considered varying between 0.001 and 50 mmol L−1. The measurements were recorded once the potential values had stabilised, ca. 30 s after addition of the solutions.
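The concentration reached in the measuring cell after each accumulated addition follows a simple dilution balance; a minimal sketch in Matlab, with hypothetical volumes not taken from the original work, is:

  V    = 25;        % initial cell volume, mL (assumed for illustration)
  c    = 0;         % current NH4+ concentration in the cell, mol L-1
  cStd = 1e-2;      % concentration of the standard added, mol L-1
  vAdd = 0.100;     % microvolume added in this step, mL
  c = (c*V + cStd*vAdd)/(V + vAdd);   % concentration after the addition
  V = V + vAdd;                       % updated total volume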

Software

The training and evaluation of the different ANNs tested in this work were done with the software package Matlab 6.0 (MathWorks).

For the modelling of the response of the sensor array, three different types of ANN were used, aimed at universal description of non-linear systems with reduced overfitting.

Feedforward backpropagation ANN with Levenberg–Marquardt training algorithm (LM)

This type of ANN is widely used with sensor arrays. It is derived from the gradient-descent algorithm [31], over which it is a clear improvement [32].

Feedforward backpropagation ANN with Bayesian regularization training algorithm (BR)

This variant applies statistical methods to detect the neurons causing overfitting, which are subsequently pruned [33]. With this strategy, the training procedure can be regarded as the search for, and inference of, the network components with the greatest probability of forming the best model [34].

Generalized regression ANN (GR)

This ANN implements the so-called Nadaraya–Watson kernel regression, as initially proposed by Specht [35]. It provides a network model that achieves an interpolated output function from a finite number of inputs, applying a smoothing parameter, α, to each input/output pair of the training set. Its drawback is that it employs a large number of neurons in the hidden layer, as many as the number of patterns in the training set. This ANN-GR is hardly usable for the electronic tongue model, because it is not compatible with the external test assessment; nevertheless, it was used as a reference point against which to compare the performance of the LM and BR networks. In the learning stage of this ANN-GR, the complete set of experimental points (training+test) was used.
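As an illustration only, a generalized regression network of this kind can be built with the Matlab Neural Network Toolbox as sketched below (the data layout and smoothing value are assumptions, not the settings of this work):

  % P: 8 x N matrix of sensor potentials; T: 2 x N matrix of target concentrations
  spread = 0.1;                    % smoothing parameter, illustrative value only
  netGR  = newgrnn(P, T, spread);  % one hidden neuron per training pattern
  Y      = sim(netGR, P);          % interpolated outputs for the given inputs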

Results and discussion

Topology

A significant problem arises in the selection of the ANN topology, since it is not possible to predict a correct configuration in advance. A trial-and-error process is needed, in which the training strategy, the dimension of the hidden layer, and the transfer functions used in the hidden and output layers are varied to find the proper combination. In our case, one of the factors, the transfer function of the input layer, was fixed as linear, as normally recommended for feedforward ANNs [13].

For the selected application, we initially fixed the following parameters as constants for the remaining experiments (a construction sketch in code is given after the list):

  • eight neurons for the input layer, one for each potentiometric sensor;

  • two neurons for the output layer, one for each concentration value, NH4+ and K+; and

  • considering the highly non-linear behaviour of our response model, a fixed non-linear sigmoid-shaped transfer function, specifically the tansig function [31], was used for the hidden layer.
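With these constraints, and using the Neural Network Toolbox syntax of the Matlab 6.0 era, the construction of one such network could read as follows (a minimal sketch with assumed variable names and a hypothetical hidden-layer size; not the original script):

  % Pn: normalized 8 x N input patterns; the output layer has 2 neurons (NH4+, K+)
  PR  = minmax(Pn);                                          % 8 x 2 matrix of input ranges
  net = newff(PR, [20 2], {'tansig' 'satlins'}, 'trainbr');  % hidden and output layers
  % 'trainbr' selects Bayesian regularization; 'trainlm' would give the LM variant,
  % and 'tansig' in place of 'satlins' the non-linear output transfer function.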

In order to obtain an initial estimate of the number of neurons in the hidden layer, ten neurons were used arbitrarily and a light training process of only five iterations (epochs) was run. The purpose of this pseudo-training was to determine whether the global error of the model improved systematically or not.

The global modelling error was calculated in this work as the sum of squared errors, SSE, defined as:

$$ SSE = \sum\limits_{i = 1}^{N} \left( x_{i} - a_{i} \right)^{2} $$

where N is the number of samples, and $x_i$ and $a_i$ are the expected and obtained concentration values, respectively, from the ANN model. Next, the number of neurons in the hidden layer was incremented by five, light training was performed again, and this sequence was repeated until no clear improvement in the SSE was obtained; the previous number of neurons in the hidden layer was then fixed as the base dimension for the following optimisations. From this preliminary base dimension, two further dimensions were examined, namely ±50% of the number of neurons in the hidden layer. With respect to the transfer function in the output layer, two variants were evaluated in order to check their effect: a non-linear function, tansig, and a linear function, of the satlins type [31].
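This coarse search can be summarised in code as follows (a minimal sketch with assumed variable names and an illustrative improvement criterion; the actual decision in this work was made by inspection of the SSE trend):

  bestSSE = Inf;  nBase = 10;
  for nHidden = 10:5:60
      net = newff(minmax(Pn), [nHidden 2], {'tansig' 'satlins'}, 'trainlm');
      net.trainParam.epochs = 5;              % "light" training only
      net = train(net, Pn, Tn);
      e   = Tn - sim(net, Pn);
      currentSSE = sum(sum(e.^2));            % SSE over both outputs and all samples
      if currentSSE < 0.9*bestSSE             % illustrative "clear improvement" criterion
          bestSSE = currentSSE;  nBase = nHidden;
      else
          break                               % no clear improvement: keep nBase
      end
  end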

During the few days employed in generating the input data, no appreciable drift of either the electrode constants or the sensitivities was detected. To check this point, preliminary tests were performed with an extra input corresponding to a time index, intended to account for possible drift. The first ANNs studied showed that this information was the least significant for the modelling, so it was discarded in the rest of the study.

Training

From the set of 174 generated experimental values, approximately 50% were selected randomly for training of the ANN; the rest were reserved for external testing. First, the data were normalized to the range [−1, +1], both the input patterns (measured emf values) and the output patterns (concentration values). This ensures proper operation of the selected transfer functions, which also work on normalized ranges.
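A minimal sketch of this split and normalization, using the premnmx and tramnmx functions of the Neural Network Toolbox (variable names are assumptions), could be:

  N      = size(P, 2);                        % P: 8 x N potentials; T: 2 x N concentrations
  idx    = randperm(N);
  iTrain = idx(1:round(N/2));  iTest = idx(round(N/2)+1:end);
  [Pn, minP, maxP, Tn, minT, maxT] = premnmx(P(:,iTrain), T(:,iTrain));  % scale to [-1,+1]
  PnTest = tramnmx(P(:,iTest), minP, maxP);   % apply the same scaling to the test inputs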

During training, the neural network model, formed by the numerical values of the weights interconnecting the neurons of the hidden and output layers, was iterated by changing these values towards minimization of the prediction error, SSE. This numerical procedure was started from a random arrangement of weights, generated with the pre-programmed tools provided in the Matlab environment. The training procedure was stopped when one of the two following conditions (illustrated in the sketch after the list) was achieved:

  • a maximum number of iterations (500 epochs) is reached, or

  • the modelling error, SSE, falls below an arbitrarily preset value of 0.001. This value was chosen as a compromise based on initial evaluation of the data: a larger value would not permit correct training of the BR-type ANNs, whereas a smaller value would cause overfitting of the ANNs employing the LM algorithm, with defective generalization as the outcome.
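In the Matlab environment these stopping conditions correspond to the epochs and goal training parameters; a minimal sketch (assumed variable names, BR variant shown) is:

  net = newff(minmax(Pn), [20 2], {'tansig' 'satlins'}, 'trainbr');
  net.trainParam.epochs = 500;      % stop condition 1: maximum number of iterations
  net.trainParam.goal   = 0.001;    % stop condition 2: target modelling error
  net.performFcn        = 'sse';    % performance measured as the sum of squared errors
  net = train(net, Pn, Tn);         % training starts from random weights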

As mentioned previously, two types of training algorithm were considered, ANN-LM and ANN-BR, and each type was checked with different topologies. For every training configuration five runs were launched, each with a fresh random initialisation of the weights. This precaution is recommended to estimate precision and to give confidence in the final calibration model; moreover, it permitted us to check the effect of the initial conditions on the ANN performance.
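Such replicate trainings can be sketched as follows (assumed variable names; the init function re-draws the random weights before each run):

  for run = 1:5
      net = init(net);                     % fresh random weights and biases
      net = train(net, Pn, Tn);
      Yn  = sim(net, PnTest);
      Y   = postmnmx(Yn, minT, maxT);      % back to concentration units
      % store Y and compute the RAE(%) of this run (see the expression below)
  end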

Testing

The modelling ability of each proposed ANN configuration was assessed with the remaining 50% of the data, not used for training. This performance was quantified numerically by checking the individual values through the relative absolute error, RAE (%), calculated according to:

$$ RAE(\%) = \frac{\left| \text{Actual ANN output} - \text{Desired ANN output} \right|}{\text{Desired ANN output}} \times 100 $$

where the outputs refer to the concentration values of the two ions considered. This relative expression was chosen because of the relatively wide range of concentrations spanned, close to five orders of magnitude.
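A minimal sketch of this quantifier for one ion of the external test set (assumed variable names) is:

  % y: concentrations predicted by the ANN; t: expected values (both in mmol L-1)
  rae     = abs(y - t)./t * 100;   % relative absolute error of each individual sample
  raeMean = mean(rae);             % average over the test samples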

Optimisation of the ANN configuration

The light training procedure enabled a base number of neurons in the hidden layer to be fixed for each type of ANN training algorithm: 20 and 50 neurons for the BR and LM types, respectively.

Once the base dimensions of the topologies were set, they were studied in greater detail at three selected values (the base dimension and the base ±50%), while also considering the remaining factors defining the ANN configuration; apart from the topology and training algorithm, these included the two types of output transfer function assayed. The 12 resulting configurations are outlined in Table 2, which also gives the SSE obtained at convergence or, in the case of poor convergence, at the arbitrary end point (500 epochs).

Table 2. The 12 ANN configurations evaluated: two training algorithms, Bayesian regularization (BR) and Levenberg–Marquardt (LM), different topologies, and two transfer functions. The general performance obtained in their training is also given

Each of the 12 ANNs was trained five times, each time with random initialisation of the neuron weights, in order to obtain a precision estimate for the model. Each network, trained with the criteria presented, was afterwards evaluated, or tested, with the data set not participating in the training. The predicted values obtained were used to calculate the RAE (%), a relative error. Table 3 presents information related to this feature: the range of dispersion of the errors observed for the different training variants and for the two ions considered, NH4+ and K+. For some of the cases evaluated, particularly the LM-T-50 and LM-T-75 configurations, extreme dispersion values were detected; for the rest of the ANNs tested, the mean error did not reach 8%.

Table 3. Ranges of relative absolute error, RAE (%), obtained in the test stage for the different ANNs evaluated, corresponding to five consecutive trainings employing random initial weights

For the rest of the study, the ANN configurations were to be selected on the basis of lower RAE (%) values for the external test set. These values are visualized in Fig. 2, which shows this error quantifier for each ion considered. From this figure, and from the performance data in Table 3, it can be deduced that the LM ANN configurations, those employing the Levenberg–Marquardt training algorithm, require fewer iterations for convergence than those employing Bayesian regularization (BR); indeed, some of the latter (BR-T-10 and BR-T-30) did not reach convergence.

Fig. 2. Relative absolute error, RAE, values, expressed as percentages, obtained for the external test set not participating in the training process, for the different ANN configurations evaluated. Minimum, average, and maximum RAE values are indicated; the observed ranges correspond to five randomly initialised training processes

However, in the external test, the ANNs of the BR type were shown to be a better calibration model for the sensor array than the LM type, as observed in Table 3. From the values in Fig. 2 it can be stated that the average RAE (%) values for the different BR networks employing the linear transfer function in the output layer were, in all cases, below 1%, a remarkable result; this was never achieved by any of the LM configurations. The reduced number of neurons used in the hidden layer of the BR-type networks is a further advantage, which should simplify their implementation in a portable electronic system for in-situ, out-of-laboratory use. This advantage becomes clearer when compared with the LM-S-75 configuration (the LM network with the best performance), which employs a greater number of neurons.

Additionally, the results obtained during the test stage show that the LM networks tend to present overfitting compared with the BR type; the generalization ability of the BR networks is better than that of the LM networks, as seen in the external test stage. Nevertheless, the latter present good learning potential, as demonstrated by their lower training errors, SSE.

A further consideration is the effect of the transfer function in the output layer. With both types of network the modelling was better when this function was of the satlins, i.e. linear, type. It is worth recalling here that models built with ANNs have a largely empirical foundation; the intuitive interpretation is that the hidden layer performs a classification of the (not linearly separable) data, while the output layer performs their fine quantification.

Validation of the model

According to the results obtained, the network that best modelled our potentiometric sensor array, or electronic tongue, was the BR-S-20 type, that is, a neural network with 20 neurons in its hidden layer, a linear satlins transfer function in the output layer, and Bayesian regularization as the learning strategy. Figure 3 summarises the global behaviour in modelling the system, both in the training and external test stages, for the two ions considered, NH4+ and K+. Satisfactory prediction ability is found for ammonium alone, potassium alone, and their mixtures. Figures 4 and 5 present the correlations between obtained (y) and expected (x) values for each point of the external test set, individually for the ammonium and potassium ions. The accuracy of the response demonstrated by the sensor array for the external test set is similar for the NH4+ ion, y=1.000x+0.0272 (mmol L−1), and the K+ ion, y=0.993x−0.0325 (mmol L−1). For the training set the corresponding values are comparable, although this is expected and less objective, since these data make up the information used in the learning process; in this case the comparison line for the NH4+ ion is y=1.000x−0.00209 (mmol L−1) and for the K+ ion y=1.000x−0.00324 (mmol L−1).
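These comparison lines and the correlation coefficients reported in Figs. 4 and 5 can be obtained by ordinary least squares; a minimal sketch for one ion (assumed variable names) is:

  % t: expected concentrations; y: concentrations obtained by the ANN (mmol L-1)
  p = polyfit(t, y, 1);             % p(1) = slope, p(2) = intercept of the y vs. x line
  R = corrcoef(t, y);  R = R(1,2);  % correlation coefficient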

Fig. 3. Simultaneous visualization of the goodness of fit finally achieved with the best ANN configuration tested (BR-S-20) for the two ions considered. The represented pairs are the complete data set (targets): open circles, true concentration values; crosses, values obtained for the training set; filled circles, values obtained for the test set

Fig. 4. Correlation between expected and obtained concentration values for the ammonium ion. The dotted line represents the theoretical comparison line y=x. The ANN used was the BR-S-20; R is the correlation coefficient

Fig. 5. Correlation between expected and obtained concentration values for the potassium ion. The dotted line represents the theoretical comparison line y=x. The ANN used was the BR-S-20; R is the correlation coefficient

As further validation, the performance of the selected network, the BR-S-20 configuration, was compared with that of two GR networks, one for the NH4+ ion and the other for the K+ ion, both trained with the full data set. Each of these networks, of probabilistic nature, used 174 neurons in its hidden layer and is known to attain very good modelling ability. The average RAE (%) values obtained with these networks were 0.593% and 0.552% for the NH4+ and K+ ions, respectively; the corresponding values for the selected network, BR-S-20, were 0.6916% and 0.5366% for the test set, of comparable magnitude.

In order to show a typical application in the laboratory, seven synthetic samples were prepared by appropriate dilution of standards in the imidazole background and processed as proposed above. Employing the BR-S-20 network model and the learning process performed previously, results quite close to the expected values were obtained in all cases for both ions considered. These results are summarised in Table 4.

Table 4. Results obtained in the determination of ammonium and potassium ions in synthetic samples employing the proposed electronic tongue with the final network BR-S-20. Mean and dispersion values calculated from seven independent training processes, of 500 epochs each. Mean SSE of the seven training stages=0.00353

Finally, to illustrate the effort involved in the described approach, the time needed for each stage can be estimated from the case studied. These times are:

  • obtaining the initial starting information, which is estimated at around 2 days (this figure can be extrapolated to any case involving two ions);

  • optimisation of the ANN, which normally takes around 3 days of computer analysis but can be shortened by following directions similar to those explained in this work; and finally

  • once the network is trained, the final readout for a new set of samples, which can be immediate.

Under the conditions employed here, and using modern Pentium-processor computers (clock frequency above 1 GHz), a single training process is completed in less than 30 s.

Conclusions

An electronic tongue for the simultaneous determination of ammonium and potassium has been developed and optimised. This is achieved by a direct measurement process employing an ISE array and advanced processing tools, artificial neural networks, the best variant of which involved training with a Bayesian regularization algorithm. A double criterion was adopted for the optimisation. First, networks with good training ability are needed, i.e. with a correct description of the underlying model and reduced final errors. Secondly, good generalization potential is also sought, to yield comparable performance for training and test data. With the final configuration, the performance in the prediction of the concentration values of both ions is outstanding, with average relative errors (RAE %) around 1% for the different training and validation subsets considered. Extra validation using a few separate synthetic samples also showed comparable results, with almost all errors below 1% of the considered concentration range. The learning strategy used, Bayesian regularization, demonstrates clearly better performance than the more established Levenberg–Marquardt algorithm. Moreover, the final optimal topology is of reduced complexity (a single hidden layer of 20 neurons), facilitating future hardware implementation as a portable analyser for in-situ use or as a base station with a radio or SMS mobile-phone link. The possibility of creating a higher-order system capable of assaying a greater number of ions is clearly apparent, simply by increasing the number and variety of sensors used in the array. The chief foreseeable difficulty is the exponential growth of the amount of information needed for proper modelling of the system, where the training set could reach several hundred standard solutions. In addition, for correct application of the proposed methodology to real samples, a reference buffer is recommended for conditioning and pretreatment, together with the use of relative measurements against the same reference solution. As a whole, the results presented summarize an interesting advanced strategy, amenable to easy automation, for the simultaneous quantification of different species in solution with what is already called an electronic tongue.