1 Introduction

Micro-milling manufacturing operations exploit reduced-size end mills (tools) at high rotational speeds to produce complex-shaped components. Consequently, the dynamics and the cutting coefficients vary due to the elastoplastic behavior of the workpiece material and the required high-speed values [1]. In addition, chip thickness variation can occur for specific combinations of process parameters due to the modulations left on the surface by successive cuts (as illustrated in Fig. 1). This complex interaction yields large forces and displacements that can promote chatter, a self-excited vibration that degrades the surface finish and reduces tool life, affecting productivity.

Some authors have proposed modeling strategies aiming to monitor and predict this instability. For instance, Afazov et al. [2] and Jin and Altintas [3] proposed different chatter modeling approaches considering nonlinearities in the micro-milling cutting forces and in the process damping, respectively. Recently, Lu et al. [4] included the centrifugal forces and gyroscopic effects caused by the high-speed rotation of the micro-milling spindle in their modeling strategy, aiming for a more realistic representation of this phenomenon. Due to the variation of the dynamics, Graham et al. [5] presented two novel robust chatter stability models considering uncertain parameters. Recently, Mamedov [6] reviewed different modeling techniques related to the micro-milling process. One can conclude from these works that modeling the dynamics of a micro-milling process is a challenging task.

Therefore, model-free alternatives for monitoring and predicting the occurrence of chatter have also been investigated, since a proper selection of cutting parameters can ensure a chatter-free cut [1]. These alternatives require dedicated instrumentation such as dynamometers, accelerometers, cameras, and microphones, and the monitoring and prediction strategies must adequately process the data acquired by these devices. For instance, Chen et al. [7] exploited Support Vector Machines (SVM) for detecting chatter using image features captured by a camera. This method is highly influenced by noise; therefore, it requires extensive data treatment, which might jeopardize its in-process applicability. The online chatter detection method proposed in [8] involves the use of piezoelectric actuators to excite the system externally; despite its accuracy, the need for external actuators is a significant drawback of this proposal. Li et al. [9] and Yuan et al. [10] used accelerometers for measuring the vibration signals during the machining process. A shift of the dominant frequency components and a sharp increase in vibration amplitude can be perceived during chatter occurrence. Due to these phenomena, Li et al. [9] used a dimensionless chatter indicator based on revolution root-mean-square (RMS) values for evaluating the stability of the micro-milling process. Alternatively, Yuan et al. [10] proposed a novel chatter-detection method based on wavelet coherence functions for the same evaluation. The authors concluded that the wavelet coherence functions of two orthogonal acceleration signals are sensitive enough for chatter detection.

Acoustic emission (AE) signals, illustrated in Fig. 1, are the most commonly used data for chatter detection in micro-milling operations. Inasaki [11] discussed the advantages of the AE sensor for tool condition monitoring, highlighting its feasibility regarding sensor mounting and signal processing. These signals contain information about vibrations, tool-workpiece contact, surface integrity, and topography [12]. Another asset is that the required instrumentation is a non-invasive, low-cost alternative [13]; being non-invasive, it does not modify the system dynamics.

Fig. 1 Illustration of successive cuts

In this way, time- and frequency-domain metrics using AE signals for chatter detection in micro-milling operations have been proposed. Filippov et al. [14] investigated AE and acceleration signals, concluding that the power spectrum is an appropriate metric for monitoring fast-occurring changes in cutting process stability. Ribeiro et al. [12] evaluated the time-domain metric proposed by Li et al. [9], namely the chatter indicator, using AE signals instead of acceleration data, concluding that this indicator is not directly applicable to this set of data. Therefore, Ribeiro et al. [12] proposed a metric based on the AE RMS values and evaluated it for two materials with different grain sizes during chatter-free and chatter cuts. Figure 2 shows the micro-milling experiments. Unfortunately, neither discussions nor evaluations of the data-acquisition period required to derive the metric are presented, jeopardizing any conclusion about the metric's applicability for in-process chatter detection. Li et al. [15] claimed that the fast Fourier transform and the time-domain RMS value of AE signals can be effectively used for chatter detection in the robotic milling process; still, no conclusions can be directly drawn for micro-milling operations.

Fig. 2 Micro-milling machining experiments [12]

Due to the complexity of the dynamics involved in the micro-milling process, machine learning algorithms can be helpful for chatter detection since they have been successfully employed as classifiers. Inasaki [11] discusses the application of artificial neural networks (ANN) for identifying chatter in machining processes using AE signals. Recently, Wang et al. [16] employed an unsupervised machine learning-based method for chatter detection in milling operations using time- and frequency-domain metrics based on AE, acceleration, and bending signals. The authors concluded that the signal fractal dimension is the best time-domain feature for training an accurate classifier. Regarding micro-milling operations, the authors of [17] used SVM for predicting surface roughness, demonstrating the potential of these techniques. This work investigates the use of machine learning-based classifiers for chatter detection in micro-milling operations. Through this investigation, we indicate the most appropriate features and classifiers for in-process detection. These classifiers require strategies with low computational effort; therefore, only time-domain features are investigated. Moreover, an optimizer tunes the sliding window strategy used to acquire the time-domain signals, maximizing the classifier accuracy and minimizing the false positive/negative rates.

This article is organized as follows. Section 2 presents some concepts regarding the machine learning-based classifiers exploited in this work: the Perceptron and the SVM. Both classifiers are supervised learning algorithms requiring data from the different classes during the training phase. Section 3 details the proposed in-process chatter detection technique. The authors investigated the performance of the proposed classifiers for two different materials: COSAR and UFG (workpiece in Fig. 2). Figure 2 shows the experimental setup, and details about this experimental campaign are given in Sect. 4. The results of this investigation are presented and discussed in Sect. 5. Finally, conclusions are drawn in Sect. 6.

2 Machine learning-based classifiers

Machine learning-based classifiers are capable of separating different data classes. For the sake of illustration, Fig. 3 shows the separability of two datasets characterized by two features, \(F_1\) and \(F_2\). While Fig. 3(a) illustrates linearly separable (LS) sets, Fig. 3(b) shows nonlinearly separable (NLS) ones. On the one hand, a straight line is capable of separating the datasets in Fig. 3(a), demonstrating that these are LS. LS sets can be classified by simple strategies that require fewer computational resources; several techniques exist for linear separation, such as linear programming, artificial neural networks (ANN), and Fisher's linear discriminant method, among others [18]. On the other hand, the datasets illustrated in Fig. 3(b) cannot be separated by a straight line since they are NLS. Therefore, NLS sets require other strategies, such as Multi-Layer Perceptron and SVM-based classifiers.

Fig. 3 Class separability: a linearly separable and b nonlinearly separable sets

The chatter detection methodology proposed in this work employs two machine learning-based classifiers: the Perceptron, an ANN-based classifier often used to classify LS sets, and the SVM, which presents satisfactory performance for NLS sets.

In this work, nine features extracted from the datasets are used to compose the input vectors \(\mathbf {x}_i \in \mathbb {R}^N\), where N is the number of selected features used by the classifiers, \(i=1 \ldots n\), and n is the number of evaluated sets extracted by the sliding window algorithm. In other words, these vectors contain statistical features extracted from the AE signals acquired during the micro-milling of two workpieces with different grain-sized materials (COSAR and UFG).

2.1 ANN-based classifier: the perceptron

Less complex machine learning algorithms, such as the Perceptron, are often sufficient to classify two classes of samples. The Perceptron is a supervised learning algorithm used in binary classification. This learning algorithm converges in a finite number of iterations for LS sets. The user can explore this fact to verify the datasets' linear separability, since the Perceptron does not converge for NLS sets. Furthermore, this supervised machine learning algorithm brings a computational advantage over the others because it is simple to implement [18].

The threshold function is used for classification to map the inputs \(\mathbf {x}_i \in \mathbb {R}^N\) to a single binary value, as illustrated in Fig. 4:

$$\begin{aligned} f(\mathbf {x}_i)={\left\{ \begin{array}{ll} 1 &{} {\text {if}}\ {}_P{\mathbf {w}}^T \cdot \mathbf {x}_i + {}_Pb>0,\\ 0&{}{\text {otherwise}},\end{array}\right. } \end{aligned}$$
(1)

where \({}_P\mathbf {w} \in \mathbb {R}^N\) is the weighting vector and \({}_Pb\) is the bias.
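The text above gives only the threshold function; a minimal training sketch is shown below, assuming the standard Perceptron learning rule with labels in {0, 1}. The dataset and learning rate are illustrative and not taken from the experimental campaign.

```python
import numpy as np

def train_perceptron(X, y, lr=1.0, max_iter=1000):
    """Train a Perceptron; returns (w, b, converged).

    X: (n, N) feature matrix; y: labels in {0, 1}.
    Convergence within max_iter suggests the set is linearly separable.
    """
    n, N = X.shape
    w = np.zeros(N)
    b = 0.0
    for _ in range(max_iter):
        errors = 0
        for xi, yi in zip(X, y):
            pred = 1 if w @ xi + b > 0 else 0  # threshold function, Eq. (1)
            if pred != yi:
                # standard Perceptron update rule
                w += lr * (yi - pred) * xi
                b += lr * (yi - pred)
                errors += 1
        if errors == 0:
            return w, b, True   # converged: classes separated by a line
    return w, b, False          # did not converge within max_iter

# Two hypothetical linearly separable clusters in the (F1, F2) plane
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.8]])
y = np.array([0, 0, 1, 1])
w, b, converged = train_perceptron(X, y)
```

As in the proposal, a failure to converge within the iteration budget would flag the feature set as nonlinearly separable.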

Fig. 4 Illustration of a Perceptron-based classifier

2.2 Support Vector Machine (SVM)

SVM is a supervised machine learning method widely used in classification problems. This algorithm aims to find a hyperplane that separates the data classes with the maximum margin (the hard-margin formulation). For example, Fig. 5 shows two classes, illustrated by circles and stars, separated by a hyperplane. These classes are plotted according to two features, \(F_1\) and \(F_2\). The points closest to the hyperplane are called support vectors. Mathematically, the hyperplane can be described as:

$$\begin{aligned} \mathbf {w}^T \cdot \mathbf {x}_i + b = 0, \end{aligned}$$
(2)

where \(\mathbf {w}\) and b are the coefficients found during the training phase by maximizing the distance between the hyperplane and the support vectors [19]. This distance is illustrated as the total margin in Fig. 5. This optimization problem can be described as:

$$\begin{aligned} &\min _{\mathbf {w}, \ b} \quad \frac{1}{2}\mathbf {w}^T \cdot \mathbf {w} \\ &\text {subject to } \quad y_i(\mathbf {w}^T \cdot \mathbf {x}_i + b) \ge 1 \end{aligned}$$
(3)

where the scalar \(y_i \in \{-1,1\}\) defines the class to which the set \(\mathbf {x}_i\) belongs.

In some scenarios, the described algorithm cannot achieve satisfactory results because the datasets are not separable by a hyperplane. In these cases, two techniques can obtain better results: Kernel functions and the soft-margin SVM.

Kernel functions apply a nonlinear transformation to the input sets; after this transformation, the classes may become linearly separable. Several Kernel functions are proposed in the literature, for instance, the linear, polynomial, and RBF Kernel functions [20]. This work investigates the soft-margin SVM considering the RBF Kernel function. Considering two input vectors, \(\mathbf {x}_{a} \in \mathbb {R}^N\) and \(\mathbf {x}_{b} \in \mathbb {R}^N\), the RBF Kernel function is calculated by:

$$\begin{aligned} K(\mathbf {x}_{a},\mathbf {x}_{b}) = e^{-\gamma \Vert \mathbf {x}_{a}-\mathbf {x}_{b} \Vert ^{2}}, \end{aligned}$$
(4)

where the term \(\gamma\) controls the flexibility of the function.
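As a quick numerical illustration of Eq. (4), the sketch below evaluates the kernel for two arbitrary vector pairs; the vectors and the γ value are assumptions chosen only to show how the distance between points drives the kernel value.

```python
import numpy as np

def rbf_kernel(xa, xb, gamma):
    # Eq. (4): K(xa, xb) = exp(-gamma * ||xa - xb||^2)
    diff = np.asarray(xa, dtype=float) - np.asarray(xb, dtype=float)
    return np.exp(-gamma * np.sum(diff ** 2))

# Identical points: the kernel reaches its maximum value, 1.0
k_near = rbf_kernel([1.0, 2.0], [1.0, 2.0], gamma=0.5)
# Distant points: ||diff||^2 = 8, so K = exp(-0.5 * 8) = exp(-4)
k_far = rbf_kernel([1.0, 2.0], [3.0, 4.0], gamma=0.5)
```

Larger γ makes the kernel decay faster with distance, i.e., increases the flexibility of the resulting decision boundary.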

The soft-margin SVM, in turn, imposes a finite penalty on margin violations. This can be posed as a multiobjective problem aiming at the maximization of the distance between the hyperplane and the support vectors and the minimization of the misclassification error. This error can be quantified by the distance between the misclassified points and the margins, \(d_i\) in Fig. 5(b). Using the weighted sum method, this multiobjective problem can be posed as:

$$\begin{aligned} &\min _{\mathbf {w}, \ b} \quad \frac{1}{2}\mathbf {w}^T \cdot \mathbf {w} + C \sum _{i=1}^{n} d_i\\ &\text {subject to } \quad y_i(\mathbf {w}^T \cdot \mathbf {x}_i + b) \ge 1 - d_i \end{aligned}$$
(5)

where \(i = 1 \ldots n\) and the parameter C, denoted the box constraint, controls the trade-off between maximizing the margin and minimizing the number of misclassified points. An optimization approach can be used to choose an appropriate value of C. This work uses the soft-margin SVM with the RBF Kernel function.
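A soft-margin RBF SVM can be sketched as follows on synthetic NLS data. The code uses scikit-learn's `SVC` as a stand-in for the Matlab implementation employed later in this work; the ring-shaped dataset and the C and γ values are illustrative assumptions, not the tuned values reported in the results.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical NLS data: one class clustered at the origin, the other on a ring
rng = np.random.default_rng(0)
X_inner = rng.normal(0.0, 0.3, size=(40, 2))
angles = rng.uniform(0.0, 2.0 * np.pi, 40)
X_outer = np.c_[2.0 * np.cos(angles), 2.0 * np.sin(angles)]
X_outer += rng.normal(0.0, 0.1, size=(40, 2))
X = np.vstack([X_inner, X_outer])
y = np.r_[np.zeros(40), np.ones(40)]  # 0: chatter-free, 1: chatter (labels only)

# C is the box constraint of Eq. (5); gamma is the RBF flexibility of Eq. (4)
clf = SVC(kernel="rbf", C=10.0, gamma=1.0).fit(X, y)
train_acc = clf.score(X, y)
```

No straight line separates these two classes, yet the RBF kernel handles them easily, which is precisely the situation motivating the soft-margin RBF SVM in this work.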

Fig. 5 Illustration of a soft-margin SVM-based classifier

3 Methodology

This section describes our proposal for classifying the occurrence of chatter, an in-process chatter detection technique. Based on the inputs, \(\mathbf {x}\), machine learning-based classifiers should present a binary classification: chatter-free and chatter cuts. In other words, there are only two possible outputs for these classifiers. Our proposal should be as simple as possible, enabling in-process classification; therefore, it uses only time-domain signals. Moreover, to keep the generality of the proposal, only statistical features are extracted from these signals. According to the literature, these features, described in Table 1, are usually correlated with the occurrence of chatter.

These features are derived for time intervals, denoted as windows, with w data points. The algorithm shifts this window by a step of s data points, as illustrated in Fig. 6 (arbitrary values are used in this illustration). This sliding window algorithm extracts n windows. The statistical features are derived for each window, yielding the feature vectors \(\mathbf {F}_j \in \mathbb {R}^n\), where \(j= 1 \ldots 9\), as described in Table 1.
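The sliding window extraction can be sketched as follows; only four of the nine features of Table 1 are implemented (their exact definitions are assumed), and the sinusoidal signal is a placeholder for an AE recording.

```python
import numpy as np

def sliding_windows(signal, w, s):
    """Split a 1-D signal into n windows of w points, shifted by s points."""
    n = (len(signal) - w) // s + 1
    return np.array([signal[i * s : i * s + w] for i in range(n)])

def extract_features(window):
    # A subset of the statistical features of Table 1 (assumed definitions)
    return np.array([
        np.mean(window),                # mean of the signal amplitude
        np.sqrt(np.mean(window ** 2)),  # RMS of the signal amplitude
        np.max(np.abs(window)),         # highest absolute value
        np.var(window),                 # variance of the signal amplitude
    ])

signal = np.sin(np.linspace(0.0, 20.0, 1000))  # placeholder for an AE signal
windows = sliding_windows(signal, w=200, s=100)
# One row per window; columns play the role of the vectors F_j
F = np.vstack([extract_features(win) for win in windows])
```

With 1000 samples, w = 200, and s = 100, the algorithm yields n = 9 windows, so each feature vector \(\mathbf {F}_j\) has nine entries.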

Table 1 Selected Statistical Features
Fig. 6 Feature extraction using the sliding window algorithm and illustration of the input vectors \(\mathbf {F}_j\) where \(j = 1 \ldots 9\)

Figure 7 shows the matrix of data used for the training of the classifiers considering arbitrary values. For each window (as shown in Fig. 6), nine features are extracted (\(\mathbf {F}_j \in \mathbb {R}^n\), where \(j= 1 \ldots 9\), as described in Table 1). The output vector \(\mathbf {O_c}^M\) is associated with the binary classification, i.e., the output values can be 0 or 1, depending on the occurrence of chatter: \(\mathbf {O_c}^M(i) = 0\) for data acquired during a chatter-free cut and \(\mathbf {O_c}^M(i) = 1\) for data acquired during a chatter cut. The index M is related to the workpiece's material; a classifier should be trained for each material.

Fig. 7 Set of points used for training and testing classifiers

In this work, two feature sets are used as inputs: Features' Set 1 and Features' Set 2. Features' Set 1 takes into account all the extracted features described in Table 1; therefore, \(\mathbf {x}_i^{Set 1} = [\mathbf {F}_1(i) \ \mathbf {F}_2(i) \ \ldots \mathbf {F}_9(i)]\). Features' Set 2 only considers two selected features, named \(\mathbf {F}^{*}_1\) and \(\mathbf {F}^{ *}_2\); therefore, \(\mathbf {x}_i^{Set 2} = [\mathbf {F}^{*}_1(i) \ \mathbf {F}^{*}_2(i)]\). Batista et al. [21] proposed that the selected features should be strongly correlated with the desired output of the classifier (\(\mathbf {O_c^M}\)) and weakly correlated with each other. These properties can be numerically evaluated by:

$$\begin{aligned} z^{M}_{j}=\rho ^M_{j}-\rho _{j} \end{aligned}$$
(6)

where M is the material, \(j = 1 \ldots 9\), \(\rho ^M_{j}\) is the Pearson correlation between \(\mathbf {F}_j\) and the expected classifier outputs \(\mathbf {O}_c^{M}\), and \(\rho _{j}=\sum _{k=1}^{9} \rho _{jk}\), where \(\rho _{jk}\) is the Pearson correlation between the features \(\mathbf {F}_j\) and \(\mathbf {F}_k\). The features with the two highest \(z^{M}_{j}\) values are selected to be used by the classifiers, yielding \(\mathbf {F}^{*}_1\) and \(\mathbf {F}^{*}_2\).
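The selection rule of Eq. (6) can be sketched as below. The synthetic features and expected outputs are hypothetical, constructed only so that one feature clearly tracks the output while the others are uncorrelated noise.

```python
import numpy as np

def select_two_features(F, Oc):
    """Return the indices of the two features with the highest z_j (Eq. 6).

    F: (n, m) matrix with one column per feature; Oc: (n,) expected outputs.
    """
    m = F.shape[1]
    # rho_j^M: Pearson correlation between feature j and the expected output
    rho_out = np.array([np.corrcoef(F[:, j], Oc)[0, 1] for j in range(m)])
    # rho_j: sum of Pearson correlations between feature j and every feature
    rho_feat = np.corrcoef(F, rowvar=False).sum(axis=1)
    z = rho_out - rho_feat
    return np.argsort(z)[-2:][::-1]  # two highest z_j, best first

# Hypothetical example: feature 0 tracks the output; features 1-2 are noise
rng = np.random.default_rng(1)
Oc = rng.integers(0, 2, 200).astype(float)
F = np.c_[Oc + 0.1 * rng.normal(size=200),  # strongly correlated with Oc
          rng.normal(size=200),
          rng.normal(size=200)]
best = select_two_features(F, Oc)
```

As expected, the output-tracking feature obtains the highest \(z_j\) and is ranked first.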

The performance of the classifiers can be improved by using optimal values for the window and step (see Fig. 6). The optimal values of the window (\(w^*\)) and step (\(s^*\)) are found by maximizing the function:

$$\begin{aligned}{}[w^*,s^*]=\underset{w \in [lw,uw], \ s \in [ls,us]}{{\text {argmax}} \ (ACC - FNR - FPR)} \end{aligned}$$
(7)

where lw and ls are the lower bounds and uw and us are the upper bounds of the decision variables. A Differential Evolution (DE) optimizer was applied to Eq. 7 and the results are discussed in the next section. Moreover, Accuracy (ACC), False Positive Rate (FPR), and False Negative Rate (FNR) are derived as:

$$\begin{aligned} ACC = \frac{TP+TN}{TP+TN+FP+FN}, \end{aligned}$$
(8)
$$\begin{aligned} FPR = \frac{FP}{FP+TN}, \quad \text {and} \end{aligned}$$
(9)
$$\begin{aligned} FNR = \frac{FN}{FN+TP}, \end{aligned}$$
(10)

where TP and TN are the true positive and true negative classifications, while FP and FN are the false positive and false negative ones, i.e., the misclassifications. These indicators are also used for evaluating the classifiers. If these performance indicators are satisfactory, we consider that the algorithms have converged.
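Eqs. (8)-(10) translate directly into code; the confusion counts below are arbitrary illustrative values, not experimental results.

```python
def classifier_metrics(TP, TN, FP, FN):
    """Accuracy, false positive rate, and false negative rate, Eqs. (8)-(10)."""
    acc = (TP + TN) / (TP + TN + FP + FN)  # Eq. (8)
    fpr = FP / (FP + TN)                   # Eq. (9)
    fnr = FN / (FN + TP)                   # Eq. (10)
    return acc, fpr, fnr

# Hypothetical confusion counts for one classifier on one test set
acc, fpr, fnr = classifier_metrics(TP=90, TN=85, FP=15, FN=10)
# acc = 175/200 = 0.875, fpr = 15/100 = 0.15, fnr = 10/100 = 0.10
```

The optimizer of Eq. (7) maximizes `acc - fnr - fpr` computed exactly this way.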

The proposal divides the dataset obtained by the sliding window algorithm into a set applied in the training phase and another applied in the testing phase. For this purpose, the sets were randomly divided into 80% for training and 20% for testing. In summary, Algorithm 1 shows the steps of the proposed technique considering both Features' Sets.
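The search for the optimal window and step of Eq. (7) can be sketched with SciPy's `differential_evolution`. In the actual proposal, the objective would rerun feature extraction, training, and testing for each candidate (w, s) and return \(-(ACC - FNR - FPR)\); here a smooth surrogate with an assumed optimum at w = 600 and s = 150 stands in so the example runs quickly.

```python
from scipy.optimize import differential_evolution

# Surrogate for -(ACC - FNR - FPR): a hypothetical smooth bowl whose minimum
# sits at w = 600, s = 150 (assumed values, for illustration only)
def neg_objective(params):
    w, s = params
    return (w - 600.0) ** 2 / 1e5 + (s - 150.0) ** 2 / 1e4

# bounds: [lw, uw] for the window and [ls, us] for the step, in data points
bounds = [(100.0, 2000.0), (10.0, 500.0)]
result = differential_evolution(neg_objective, bounds, seed=0)
w_opt, s_opt = result.x
```

DE is a convenient choice here because the true objective is non-smooth in (w, s) and gradients are unavailable.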

Algorithm 1

4 Experimental data

Ribeiro et al. [12] carried out an experimental campaign for acquiring AE data from a micro-milling operation during chatter-free and chatter cuts; the reader can find more details about this campaign in [12]. Hereafter, the most relevant aspects of this campaign are summarized regarding the proposal of ML-based classifiers.

Figure 2 depicts the experimental setup: a workpiece, an AE sensor, and the tool-holder. An adapted Romi D800 CNC milling machine with a position accuracy of 1 µm was responsible for the machining operations. These micro-milling operations exploited the slot-cutting strategy, which removes material considering the entire tool diameter as the cutting width; therefore, the process manufactures micro-channels whose width equals the tool diameter.

The authors of this work manufactured four micro-channels (Ch1, Ch2, Ch3, and Ch4) using a 1-mm-diameter carbide endmill with two flutes. The channels are 26 mm long and 1 mm wide. The machining operator adjusted the cutting parameters to a 125 m/min cutting speed, 3 µm/tooth feed, 100 µm depth of cut, and 1.0 mm width of cut. The tool's flexibility and runout were not considered in the analysis, given that they did not influence the cutting dynamics. The feed marks measured on the slot floor by a 3D confocal OLS 4100 Olympus microscope reached 3 µm/tooth, as programmed in the machining center. The low chip load and depth of cut of the endmill-workpiece engagement significantly decreased such dynamic effects during micro-milling.

The dimensions of the workpieces were 8\(\times\)26\(\times\)60 mm. We investigated two materials: (COSAR) biphasic low-carbon steel with a grain size of 11 µm and (UFG) ultra-fine grained COSAR-60 with a grain size of 0.7 µm. Due to differences in their microstructure, the most relevant features for detecting chatter could differ, requiring the training of different classifiers for each material.

The AE signals were acquired during the machining processes by a commercial piezoelectric acoustic emission (AE) sensor with a dynamic response up to 1 MHz (see Fig. 2). A high-pass filter with a 250 Hz cutoff and an amplification of 35 dB conditioned these signals, and a NI PCI-6251 board acquired the data at a rate of 1.25 MHz. The signals acquired during the micro-milling operations are depicted in Figs. 8 and 9, where the occurrence of chatter is indicated. Chatter-free cuts are illustrated in green, while chatter cuts in red, as in Fig. 7.

Fig. 8 Time-domain AE signal from micro-milling of COSAR in a Ch1, b Ch2, c Ch3, and d Ch4

Fig. 9 Time-domain AE signal from micro-milling of UFG in a Ch1, b Ch2, c Ch3, and d Ch4

5 Results

Both Perceptron-based and SVM-based classifiers should detect the occurrence of chatter, outputting \(\mathbf {O_c}^{M}=1\), where M indicates the material (\(M = COSAR\) or \(M=UFG\)); the output should be null otherwise (chatter-free cut). These classifiers were trained and tested using Matlab. The data used during the training and testing phases are shown in Figs. 8 and 9. The number of data points for chatter and chatter-free cuts for each material is described in Table 2. These data are divided randomly between the training and the testing sets: 80% of the data is used in the training phase and 20% in the testing phase.

Table 2 Number of data points for each event (chatter or chatter-free cuts) and materials
Table 3 Features with the highest \(z^{M}_{j}\) values given by Eq. 6

Two feature sets are investigated in this work. Features' Set 2 only considers two features, \([\mathbf {F}^{*}_1 \ \mathbf {F}^{*}_2]\). These features, described in Table 3, obtained the highest \(z^{M}_{j}\) values given by Eq. 6. This proposal obtained different relevant features for each material. Feature 2, the RMS value of the signal amplitude, is the most relevant feature for both materials, indicating that it might be an important feature for other materials as well. The second most relevant features are Feature 3, the highest absolute value, for COSAR and Feature 7, the variance of the signal amplitude, for UFG.

We derived Perceptron- and SVM-based classifiers according to the proposal given by Algorithm 1. An important asset of this proposal is the use of optimal values for the sliding window algorithm. In this work, the DE algorithm found the window (\(w^*\)) and step (\(s^*\)) values that maximize Eq. 7. Table 4 gives these optimal values for each set of features and classifier. Figure 10 shows objective function values for 400 randomly selected decision variables, demonstrating that a proper choice of window and step values can yield better classifier performance. These optimal values should be used during data processing, an important consideration for the industrial viability of the proposal.

Table 4 Optimal values of step and window
Fig. 10 ACC-FNR-FPR (Eq. 7) values for randomly selected step and window values

Table 5 Classifiers’ performance for both events and for COSAR material
Table 6 Classifiers’ performance for both events and for UFG material

The classifiers’ performance is summarized in Tables 5 and 6 for the COSAR and UFG, respectively.

Considering Features’ Set 2, \([\mathbf {F}^{*}_1 \ \mathbf {F}^{*}_2]\), the Perceptron does not converge for either material. This lack of convergence indicates that these sets are not linearly separable, as observed in Figs. 11 and 12. Both figures depict the chatter and chatter-free classes and the straight line derived by the Perceptron; this line cannot separate the classes, as some points cross it into the other class's area. In contrast, the Perceptron converges for Features’ Set 1, \([\mathbf {F}_1 \ \mathbf {F}_2 \ \ldots \mathbf {F}_9]\), achieving an accuracy of 100%.

Fig. 11 Nonlinearly Separable Classes—COSAR Material

Fig. 12 Nonlinearly Separable Classes—UFG Material

SVM-based classifiers achieved good performance indexes for both Features’ Sets. For COSAR, the SVM parameters were C = 950.52 and \(\gamma\) = 2937.64; for UFG, C = 26.96 and \(\gamma\) = 8376.4.

These results are undoubtedly revealing since the two strategies performed adequately:

1. One can spend some computational effort selecting relevant features and exploit a more complex classifier, the SVM-based classifier; or

2. One can spend no computational effort selecting features and exploit a simpler classifier, the Perceptron-based classifier.

Figure 13 illustrates the in-process chatter detection strategy using the Perceptron-based classifier's output for the data acquired during the cutting of Ch4 of the COSAR workpiece. A classifier output is produced at every step; therefore, the proposal is suitable for in-process chatter detection. AE data were acquired during both the chatter and chatter-free portions of this cut. Two images of the surface topography for both cutting conditions are also shown in Fig. 13, demonstrating the occurrence of chatter during the first part of the process.
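The step-by-step output described above can be sketched as follows; the threshold classifier, the two-feature vector, and the square-wave "AE stream" are hypothetical stand-ins for the trained Perceptron and the real signals.

```python
import numpy as np

def in_process_detection(stream, w, s, classifier):
    """Emit one chatter flag per step: classify each window as it completes."""
    flags = []
    start = 0
    while start + w <= len(stream):
        window = stream[start : start + w]
        features = np.array([
            np.sqrt(np.mean(window ** 2)),  # RMS of the amplitude (Feature 2)
            np.max(np.abs(window)),         # highest absolute value (assumed)
        ])
        flags.append(classifier(features))
        start += s  # one output every s samples -> in-process detection
    return flags

# Hypothetical classifier: flag chatter when the RMS exceeds a threshold
rms_threshold = 0.5
classify = lambda f: 1 if f[0] > rms_threshold else 0

# Stand-in AE stream: low amplitude first, then a high-amplitude (chatter) part
stream = np.r_[0.1 * np.ones(400), np.ones(400)]
flags = in_process_detection(stream, w=100, s=100, classifier=classify)
```

The detection latency is at most one window plus one step, which is why the optimal w and s values of Table 4 matter for industrial deployment.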

Fig. 13
figure 13

Illustration of the in-process chatter detection strategy: Perceptron-based classifier’s output for the Ch4 of COSAR material

A significant drawback is the necessity of acquiring data for the training phase. However, it is important to highlight that the classifiers were trained with a small dataset, of a size usually available in industrial environments.

6 Conclusions

In this work, we investigated the use of machine learning-based classifiers for chatter detection in micro-milling operations using acoustic emission signals. Data from chatter and chatter-free cuts were acquired during the micro-milling of two workpieces, of COSAR and UFG, using a slot-cutting strategy.

These data are processed using a sliding window strategy that divides the complete set into several data packets. Statistical features are derived from these packets. Two case studies are investigated: (1) a features’ set composed of nine features and (2) a features’ set composed of the two most relevant features. The most relevant features should be strongly correlated with the desired output of the classifier. Moreover, optimal parameters for the sliding window strategy are found using a DE optimizer.

These data are used for training and testing two supervised machine learning classifiers: a Perceptron-based classifier and an SVM-based classifier. The former converges if the classes are linearly separable, while the latter considers misclassifications.

Through this investigation, one can conclude that the RMS value of the amplitude signal can be a relevant feature for detecting chatter. Nevertheless, this choice is dependent on the material under investigation. The time-domain signals should be acquired and processed according to the optimal window and step values for maximizing the classifiers’ performance. A classifier’s output is derived at every step; therefore, the proposal is suitable for in-process chatter detection.

A trade-off between the number of features and the complexity of the classifiers was identified: Perceptron-based classifiers converged for both materials using the nine proposed statistical features, while SVM-based classifiers can adequately detect chatter using either the two most relevant features or the larger set.

This work exploits supervised machine learning classifiers using time-domain data. However, the training phase of these classifiers requires AE data from both chatter and chatter-free cuts, which might cause undesired disruption. Therefore, unsupervised methods, which only need AE data from chatter-free cuts, should also be investigated; their use could enhance the usability of the proposal. Moreover, industrial data, described in the time and frequency domains, should also be exploited, promoting other ways of detecting chatter.