1 Introduction

For any computer vision application [2], image segmentation is an important preliminary processing operation. Dental radiography (X-ray imaging) [1] is used to determine dental problems such as tooth decay, tooth abscess and periodontal bone loss or gum disease. Proper segmentation [14, 27, 36] is required to recognize the damaged regions in these images. In the case of fire victims, dental biometrics helps to identify the victim; such a system needs additional processing steps. Nowadays, radiograph images are widely used in biometrics. Cone beam computed tomography (CBCT), tuned aperture computed tomography, charge-coupled device images acquired with a Trophy RVG-UI sensor [19, 20] and periapical radiographs are important techniques in dental treatment [30]. The current generation of CBCT systems produces useful diagnostic images [16]. Tuned aperture computed radiography [7] produces digital images that locate the implant position relative to the intended site and yields a series of tomographic slices that can be viewed interactively. The orthopantomogram [6] is a single radiograph scanned from ear to ear that is used to analyse various oral problems, e.g. impacted teeth; it provides an extensive two-dimensional view of the upper and lower jaws with the complete dentition, the surrounding tissue and the supporting structures. Recent medical articles have also evaluated automatic and semiautomatic segmentation methods based on thresholding algorithms, which could expedite the diagnostic procedure [4]. The main disadvantage of the thresholding group is the evaluation of the threshold needed to extract the main part of a dental image in the presence of image noise. The main objective of such segmentation is to threshold the image into two segments and assign a label to each, one for the “main part” and one for the “background”.

In the segmentation process, an image is partitioned into regions or objects that share the same attributes or behaviour. Image segmentation methods are categorized based on two properties, discontinuity and similarity. Typical methods are edge detection, region growing and thresholding. The last method is straightforward, widely used for image segmentation, simple to implement and consistent in performance. The basic principle of the Otsu method [35, 37], which is commonly used for image segmentation, is that the optimal threshold is the one that maximizes the between-class variance between target and background, which yields a reliable segmentation result. Such segmentation concentrates on image regions, for instance extracting tooth parts and localizing lesion regions [18]. For many applications, level sets are used to obtain a complete segmentation; their main drawback is that a correct segmentation is not guaranteed because edges are not always available [23]. Segmentation of dental radiographs is not an easy process. Fields in which segmentation is used frequently include medical applications (fuzzy-algorithm-based DXRI segmentation, magnetic resonance images and ventricular segmentation of angiographic images), satellite image segmentation and face image segmentation [9]. The results need to be recognized from little data, which favours fuzzy logic systems [13, 32, 33]. Many problems arise in radiograph images, such as low variance, noise and uneven illumination. Region segmentation operates on pixels, boundaries and regions. Dental radiographs are essential for oral diagnostic mechanisms. The main aim of radiograph segmentation is to delimit each tooth region in a dental X-ray image.

Dental radiographs are photographic images produced by X-ray penetration across the teeth and supporting structures. Poor quality, low contrast and uneven exposure are the core problems that make the segmentation task difficult. Periodontal diseases are among the most common dental problems, and radiographic segmentation can, for example, help the dentist detect and identify dental caries. DXRI segmentation has been suggested as one of the most important tools for analysing tooth images and extracting valuable data for medical diagnosis support systems. Proper image positioning, suitable X-ray exposure and correct image processing are the main factors in producing diagnostic-standard radiographs; an error in any of these factors yields a non-diagnostic, sub-optimal radiographic image. Through segmentation of a diagnostic-standard image, the dentist can recognize many conditions that might otherwise remain undetected. To enhance the performance of a clustering algorithm, users can provide additional information as an input to the algorithm. To obtain maximum segmentation accuracy, specialized data mining methods for DXRI segmentation have been analysed. The major issues in existing clustering and image processing algorithms are the difficulty of finding the parameters or general limits of tooth samples and of determining the real spectral grouping present in a data set. There are numerous clustering algorithms that can be used.

1.1 Contribution of the work

In the proposed work, HTGkFCM-Otsu has benefits over FCM in terms of low computational time and simplicity. HTGkFCM is also used to improve the results obtained by FCM with a pre-defined membership matrix. The main contribution of this work is a new collective framework that combines the Otsu thresholding method, FCM and a semi-supervised Hyperbolic Tangent Gaussian kernel Fuzzy C-Means clustering (HTGkFCM). The novel HTGkFCM-Otsu method is proposed for segmenting dental X-ray images. In the k-means algorithm a data point must belong exclusively to one cluster centre, whereas in FCM a data point is assigned a membership in each cluster centre, so it can belong to more than one cluster. The hyperbolic tangent Gaussian kernel maps the input data space non-linearly into a high-dimensional feature space and is less sensitive to noise, which gives robustness. The kFCM algorithm overcomes the disadvantage of the FCM algorithm by adding kernel information to the standard fuzzy c-means algorithm.

The remainder of this paper is organized as follows. Section 2 reviews work related to the segmentation of dental radiographs. Section 3 describes the methodologies proposed for improving segmentation accuracy. Section 4 presents the experimental setup, examines the strategies for combining comparison criteria and reviews the obtained results. Section 5 concludes the paper.

2 Related works

Rodrigues et al. [25] proposed a unified method for the automatic quantification of two kinds of cardiac fat, the epicardial and mediastinal fats, which are separated from each other by the pericardium. Much effort was devoted to minimizing user involvement. To perform proper segmentation, the proposed methodology contains registration and classification algorithms. They compared the performance of various classification and processing algorithms, including decision trees, probabilistic models and neural networks. Their experiments report a mean accuracy of 99.5% for both the epicardial and mediastinal fats when the features are normalized, a mean true positive rate of 98.0% and a dice similarity index of 97.6%. K. Stokbro et al. [28] evaluated the positional precision and accuracy of various orthognathic procedures following virtual surgical planning in 30 patients. The virtual surgical plan was compared with the postsurgical outcome using three translational and three rotational measurements, and the impact of maxillary segmentation was analysed in both inferior and superior maxillary repositioning. Until then there had been no study of the effect of segmentation on transverse expansion and positional accuracy in three-dimensional virtual surgical planning. They found a high degree of translational accuracy between the planned and postsurgical outcomes, with a large standard deviation. A rotational difference affected the maxilla by increasing the pitch, while segmentation had no important influence on maxillary placement. A posterior movement was observed in inferior maxillary repositioning, and a lack of transverse expansion was observed in the segmented cases independently of the degree of expansion. Bandyopadhyay et al. [21] proposed an integrated approach for the identification and evaluation of orthopaedic fractures in long-bone DXRI.

Automated segmentation is an important tool in computer-aided tele-medicine systems. They developed a software tool that can be conveniently used by paramedics or specialist doctors. Several ideas from digital geometry, such as the concavity index and straightness measures, are used to locate fracture positions and types and also to correct contour imperfections; the method showed suitable results in experiments on different DXRI databases. In the proposed methodology, the bone region of a DXRI is first segmented from the surrounding flesh region and the bone contour is then extracted using an adaptive thresholding approach. Discontinuities of the bone contour that might have been introduced by segmentation errors are corrected in an unsupervised manner, and finally the presence of a fracture in the bone is recognized. For easy visualization of the fracture, the method can also determine the line of break, assess the extent of damage and find its orientation. Oranges and lemons can be affected by the physiological disorders of granulation and endoxerosis, which reduces their commercial value. These disorders can be discerned in X-ray radiographs that produce images of the internal structure of the citrus fruit. Dael et al. [10] proposed to detect these issues on projected X-ray images and to classify a specimen as affected or not.

A set of image attributes is evaluated and classification is performed using a naïve Bayes or kNN classifier; this approach automatically segments healthy and affected tissue. The method permits non-destructive inspection of all fruits, avoiding both check failures due to negative sampling and the need for destructive, labour-demanding sampling. The proposed algorithm classifies 93.6% of lemons and 95.7% of oranges correctly. The classification method can be applied to any existing inline X-ray radiography equipment and is fast and robust to noise.

Hang Deng et al. [12] presented a new method, the Technique of Iterative Local Thresholding (TILT), for segmenting, quantifying and visualizing rock fractures in 3D X-ray computed tomography (xCT) images. TILT covers the following steps. First, custom masks are developed for local thresholding by a fracture-dilation method, which particularly intensifies the fracture signal on the intensity histogram. Second, the multi-scale Hessian fracture (MHF) filter is incorporated to distinguish fractures from pores in the rock matrix, which makes TILT particularly well suited for fracture characterization in granular rocks. Third, to minimize human involvement and facilitate automated processing of large 3D datasets, the thresholding and fracture-separation steps are wrapped in an optimized iterative routine for binary segmentation. As an illustrative example, TILT was applied to 3D xCT images of unreacted and reacted fractured limestone cores. Other segmentation methods in image processing were also applied to offer insight into variability. TILT outperformed the existing methods in automation, was completely effective in separating fractures from the porous rock matrix and significantly improved the separation of gray-scale intensities. Because the other methods have limited ability to differentiate fractures from pores and tend to misclassify fracture edges as void, they measured larger fracture volumes (up to 80%), roughness (up to a factor of 2) and surface areas (up to 60%). These differences in fracture geometry, as determined by 2D flow simulations, could lead to large differences in hydraulic permeability predictions.

2.1 Problem statement

Most dental X-ray image segmentation processes use the Fuzzy C-Means (FCM) algorithm. FCM is based on fuzzy set theory and uses a soft clustering methodology. The major problems with FCM are its segmentation time and its sensitivity to noise. To overcome these challenges, we introduce the HTGkFCM-Otsu method.

3 Proposed Otsu thresholding based hyperbolic tangent Gaussian kernel FCM

Here, we present a novel approach named HTGkFCM-Otsu. In this approach, a semi-supervised fuzzy clustering algorithm is applied to DXRI segmentation; its process flow is represented in Fig. 1.

Fig. 1

Process flow of the proposed method

In this approach we focus on fuzzy clustering for DXRI segmentation. The background area and the main part of a DXRI are separated by the Otsu method. Using the result of this step, the FCM algorithm is applied to cluster the dental structure domain. The output is then improved by the semi-supervised Hyperbolic Tangent Gaussian Kernel Fuzzy Clustering algorithm (HTGkFCM), which is robust and less sensitive to noise. The final segmentation result is computed by the semi-supervised fuzzy clustering algorithm in a functional way. The integration of Fuzzy C-Means (FCM), the Otsu approach and the HTGkFCM process is applied to deal with the limitations of the independent approaches.

The objective function of the Otsu-based Hyperbolic Tangent Gaussian Kernel FCM is defined as,

$$ f\left(U,V\right)=\min \left\{{F}_m\left(U,V\right),{F}_m^{HTGk}\left(U,V\right)\right\} $$
(1)

The foremost goal of this work is to enhance the precision of segmentation, because it decides the success or failure of the final analysis. To achieve this objective we adopt the following approach.

To obtain accurate segmentation results, the two clustering methods are combined in the proposed approach.

The process flow of our method is represented in Fig. 1, which comprises pre-processing, FCM and semi-supervised clustering (HTGkFCM). To avoid uncertainties, the background is first removed from the input image and segmentation is then performed on the three remaining regions. FCM has some constraints, namely sensitivity to noise and random initialization; to deal with these, a median filter is employed for noise reduction before a subtractive approach is applied to identify appropriate values for the cluster centres. FCM is an unsupervised clustering method and its performance degrades because of unlabelled data. This problem of FCM can be avoided by a semi-supervised clustering approach based on the hyperbolic tangent to segment the images into tissue regions. A detailed description of the proposed approach is given below.

3.1 Pre-processing stage (De-noising)

This phase aims to improve the quality of the image by eliminating noise and standardizing the intensity of the image pixels. A guided filter is used, which preserves edges while eliminating artefacts. Guided image filtering changes the centre pixel of every m × m neighbourhood window; here a guided filter with a window size of 3 × 3 is used to compute the value of the output pixels. The key assumption of the guided filter [15] is a local linear model between the guidance I and the filtering output X. X is assumed to be a linear transform of I in a window wK centred at pixel K:

$$ {X}_i={A}_K{I}_i+{B}_K,\kern1em \forall i\in {w}_K $$
(2)

Here (AK, BK) are linear coefficients assumed to be constant in wK. A square window of radius r is used. Since ∇X = A ∇ I, the local linear model guarantees that X has an edge only where I has an edge. The linear coefficients (AK, BK) are determined using constraints from the filter input P. The output X is modelled as the input P minus undesirable components N, such as noise or textures:

$$ {X}_i={P}_i-{N}_i $$
(3)

A solution is needed that minimizes the difference between X and P while preserving the linear model. In particular, we minimize the following cost function in the window wK:

$$ e\left({A}_K,{B}_K\right)=\sum \limits_{i\in {w}_K}\left({\left({A}_K{I}_i+{B}_K-{P}_i\right)}^2+\epsilon {A}_K^2\right) $$
(4)

Where ε is a regularization parameter that penalizes large AK. Equation (4) is a linear ridge regression model, and its solution is

$$ {A}_K=\frac{\frac{1}{\left|w\right|}{\sum}_{i\in {w}_K}{I}_i{P}_i-{\mu}_K\overline{P_K}}{\sigma_K^2+\epsilon } $$
(5)
$$ {B}_K=\overline{P_K}-{A}_K{\mu}_K $$
(6)

Here, μK and \( {\sigma}_K^2 \) are the mean and variance of I in wK, |w| denotes the number of pixels in wK, and \( \overline{P_K}=\frac{1}{\left|w\right|}{\sum}_{i\in {w}_K}{P}_i \) is the mean of P in wK. The linear coefficients (AK, BK) obtained in this way are used to compute the filtering result Xi by (2). However, a pixel i is involved in all of the overlapping windows wK that cover it, so the estimate Xi given by (2) differs when computed from different windows. Hence, all the feasible estimates of Xi are simply averaged. After computing (AK, BK) for all windows wK in the image, the filtering output is given by

$$ {X}_i=\frac{1}{\mid w\mid}\sum \limits_{K\mid i\in {w}_K}\left({A}_K{I}_i+{B}_K\right) $$
(7)

Noting that \( {\sum}_{\left.K\right|i\in {w}_K}{A}_K={\sum}_{K\in {w}_i}{A}_K \) because of the symmetry of the square window, we rewrite (7) as

$$ {X}_i=\overline{A_i}{I}_i+\overline{B_i} $$
(8)

The average coefficients of all windows overlapping pixel i are denoted \( \overline{A_i}=\frac{1}{\left|w\right|}{\sum}_{K\in {W}_i}{A}_K \) and \( \overline{B_i}=\frac{1}{\left|w\right|}{\sum}_{K\in {W}_i}{B}_K \).

Since the linear coefficients \( \left(\overline{A_i},\overline{B_i}\right) \) vary spatially, ∇X is no longer simply a scaled version of ∇I under (8). However, because \( \left(\overline{A_i},\overline{B_i}\right) \) are the output of a mean filter, their gradients can be expected to be much smaller than those of I near strong edges. In this situation we can still maintain \( \nabla X\approx \overline{A}\nabla I \), which means that abrupt intensity changes in I are largely preserved in X.
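As an illustration of Eqs. (2)-(8), the following minimal sketch implements the guided filter with box-filter means. It is a sketch under stated assumptions, not the exact implementation used in this work: it assumes grayscale arrays normalised to [0, 1], uses scipy's `uniform_filter` as the box mean over wK, and for de-noising the guidance I can simply be taken as the noisy input P itself.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, P, r=1, eps=1e-3):
    """Guided filtering of input P with guidance I (Eqs. 2-8).

    I, P : 2-D float arrays in [0, 1]; r = 1 gives the 3 x 3 window
    used in the text; eps is the regularisation term of Eq. (4).
    """
    size = 2 * r + 1
    mean = lambda x: uniform_filter(x, size=size)   # box mean over w_K

    mu_I, mu_P = mean(I), mean(P)
    corr_IP = mean(I * P)
    var_I = mean(I * I) - mu_I ** 2                 # sigma_K^2

    A = (corr_IP - mu_I * mu_P) / (var_I + eps)     # Eq. (5)
    B = mu_P - A * mu_I                             # Eq. (6)

    # average the coefficients of all windows covering each pixel, Eqs. (7)-(8)
    A_bar, B_bar = mean(A), mean(B)
    return A_bar * I + B_bar                        # X_i = A_bar_i * I_i + B_bar_i
```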

3.2 Otsu method

The Otsu approach efficiently separates the background and the main part of an image based on pixel values [24]; for fast processing, the background region is eliminated from the DXRI. Under the Otsu method an image can be divided into two regions, background and main part. Since the density of the teeth is similar to that of the bone, the soft-tissue area is taken as the background while the bone and teeth form the main part. Generally, a dental X-ray image can be separated according to its density distribution into teeth and bone (medium and high density) and the soft-tissue area (low density). The image is partitioned into two regions by a global threshold (T) chosen so that the intra-class variance of the black and white pixels is minimized. For this classification a label is assigned to every pixel, marking it as background area or main part:

$$ F(x)=\left\{\begin{array}{l}{I}_o\kern1.08em if\kern1.08em g(x)\ge T\\ {}{b}_o\kern0.96em if\kern1.08em g(x)<T\end{array}\right. $$
(9)

In Otsu, every pixel is labelled based on its grey value g(x) as background area bo or image (object) area Io. In a dental image, black is the background area and white is the object area. Here the pixel intensities range from 0 to 255, so in this work we selected the threshold value T as 127. If g(x) is greater than or equal to T, F(x) is assigned to the image area Io; if g(x) is less than T, F(x) is assigned to the background area bo.
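The sketch below illustrates this step under the assumptions above: an 8-bit grayscale radiograph stored as a numpy array, with the threshold either fixed at T = 127 as in the text or computed by maximising the between-class variance (the Otsu criterion mentioned in the Introduction). The function names `otsu_threshold` and `split_regions` are illustrative only.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold T that maximises the between-class variance
    of the background/main-part split (the Otsu criterion)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def split_regions(gray, T=None):
    """Label pixels as main part I_o or background b_o following Eq. (9);
    T defaults to the Otsu value (the text fixes T = 127 for 8-bit images)."""
    T = otsu_threshold(gray) if T is None else T
    return gray >= T          # True = main part, False = background
```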

In semi-supervised fuzzy clustering, additional information is exploited to guide, adjust and supervise the clustering process.

Semi-supervised learning approaches generally use pairwise constraints or partial labelling to transfer expert knowledge into the clustering process. Alternative and consensus clustering combine several partitions of the data into a single common assessment. In this methodology we focus on the following issues: how to evaluate a clustering model that must preserve a fixed amount of information about the existing categories, and how to define clusters that are very likely to belong to only one class. Most clustering algorithms use the Euclidean distance to reflect the relation among instances, but the Euclidean distance performs badly when the features of an instance depend on each other. In our work we use the Euclidean distance for dental image segmentation together with spatial information to reduce noise, and the proposed approach uses a Gaussian kernel and a hyperbolic tangent function. This semi-supervised function provides a better partition into clusters. In addition, the fuzzy number m (the weighting exponent of the fuzzy memberships) controls the fuzziness of the clustering algorithm and the competition between clusters. Furthermore, the hyperbolic tangent function gently controls the grey level on either side of an edge centre, decreasing the width of the edge and thereby strengthening it.

3.3 FCM

The standard FCM algorithm does not take any spatial context into account; therefore it works well only on noise-free images and is quite sensitive to noise and imaging artefacts. FCM divides the main part of a dental image into the dental and teeth structure domains. The FCM algorithm is given in Table 1.

Table 1 Algorithm for FCM based clustering

Fuzzy clustering is a soft segmentation technique that has been extensively analysed and effectively applied in image segmentation and clustering. Owing to its strong attributes, the FCM algorithm is a popular technique for segmentation tasks and one of the most familiar clustering methods. Bezdek [8] formulated the fuzzy clustering problem by appending to the goal function in Eq. (10) the term ukj, which represents the degree of membership of a data element Xk in the jth cluster. Depending on the membership degree, a data element can belong to several clusters. The goal function measures the quality of partitioning a dataset into c clusters. The algorithm is an iterative clustering approach that produces an optimal c-partition by minimizing the weighted sum of squared errors, the objective function Fm(U, V).

$$ \mathrm{Objective}\ \mathrm{function}:{F}_m\left(U,V\right)=\min \sum \limits_{k=1}^N\sum \limits_{j=1}^C{u}_{kj}^m{D}^2\left({X}_k,{V}_j\right) $$
(10)

where m is the fuzzy number (the weighting exponent of the fuzzy memberships), which controls the fuzziness of the clustering algorithm and the competition between clusters; r denotes the dimension of the data; N is the number of data elements; C is the number of clusters; ukj is the membership degree of element Xk in cluster j; Xk ∈ Rr is the k-th data element of X = X1, X2, ⋯, Xn; and Vj is the cluster centre.

The constraint on the memberships is defined as,

$$ \mathrm{Constraint}:\sum \limits_{j=1}^C{u}_{kj}=1;{u}_{kj}\in \left[0,1\right];\forall k=1,2,\cdots, N $$
(11)

The distance between a pixel and a cluster centre is calculated using the squared Euclidean distance given below:

$$ {D}^2\left({X}_k,{V}_j\right)={\left\Vert {X}_k-{V}_j\right\Vert}^2 $$
(12)

Equation (10) describes a constrained optimization problem. Using the Lagrange multiplier technique, it can be converted into an unconstrained optimization problem whose solution gives the following update equations:

$$ {V}_j=\frac{\sum \limits_{k=1}^N{u_{kj}}^m{X}_k}{\sum \limits_{k=1}^N{u_{kj}}^m} $$
(13)
$$ {u}_{kj}=\frac{1}{\sum \limits_{i=1}^C{\left(\frac{\left\Vert {X}_k-{V}_j\right\Vert }{\left\Vert {X}_k-{V}_i\right\Vert}\right)}^{\frac{2}{m-1}}} $$
(14)
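A compact sketch of the update loop defined by Eqs. (10)-(14) is given below. It assumes the image has been reshaped into an N × 1 intensity vector, initialises the membership matrix randomly, and uses the parameter values m = 2 and ε = 0.002 quoted in Section 4 as defaults; the subtractive initialisation of the cluster centres mentioned above is not reproduced here, and the function name `fcm` is illustrative only.

```python
import numpy as np

def fcm(X, C, m=2.0, eps=2e-3, max_iter=100, seed=0):
    """Standard FCM of Eqs. (10)-(14).

    X : (N, r) data matrix (e.g. pixel intensities reshaped to N x 1),
    C : number of clusters, m : fuzzy exponent, eps : convergence threshold.
    Returns the membership matrix U (N, C) and the cluster centres V (C, r).
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), C))
    U /= U.sum(axis=1, keepdims=True)              # rows sum to 1, Eq. (11)

    for _ in range(max_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]   # centre update, Eq. (13)
        # squared Euclidean distances D^2(X_k, V_j), Eq. (12)
        D2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
        D2 = np.fmax(D2, 1e-12)                    # avoid division by zero
        inv = D2 ** (-1.0 / (m - 1))
        U_new = inv / inv.sum(axis=1, keepdims=True)   # membership update, Eq. (14)
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return U, V
```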

3.4 Hyperbolic tangent Gaussian kernel based FCM (HTGkFCM)

The fuzzy kernel c-means algorithm (kFCM) is described below. The kernel procedure maps the non-linear input data space into a high-dimensional feature space, in which distances can be evaluated through the kernel without expensive direct computation. The Gaussian kernel is appropriate for clustering and essentially makes the algorithm less sensitive to noise. The kFCM algorithm overcomes the disadvantages of the FCM algorithm by adding kernel information to the standard fuzzy c-means algorithm. Data that are complex and non-linearly separable in the input space become simpler and linearly separable in the feature space after the non-linear transform. The kFCM algorithm [34] is used to simplify and improve the result obtained from the pre-defined membership matrix. Through a non-linear transform, this method maps the input data into a high-dimensional feature space and then performs FCM in that feature space.

The HTGkFCM algorithm is given in Table 2. In our approach, the cluster centres (prototypes) are initialized from the updated FCM cluster centres having the largest membership values, and the clustering process is then carried out in the kernel space. The kernel version of the FCM algorithm has the following objective function:

$$ \mathrm{Objective}\ \mathrm{function}:{F}_m^{HTGk}\left(U,V\right)=\min \sum \limits_{k=1}^N\sum \limits_{j=1}^C{u}_{kj}^m\;Q\;\left({X}_k,{V}_j\right) $$
(15)

The fuzzy number is denoted as ‘m’. Without considering spatial information, the conventional kernel FCM algorithm is highly vulnerable to noise during segmentation. Instead of the plain Euclidean distance, our proposed approach uses a Gaussian kernel and a hyperbolic tangent function together with spatial information for dental image segmentation, which reduces the effect of noise and makes the method more efficient and robust.

Table 2 Algorithm for the HTGkFCM based clustering
$$ Q\;\left({X}_k,{V}_j\right)=\left(1-K\left({X}_k,{V}_j\right)\right)\left(1-H\left({X}_k,{V}_j\right)\right) $$
(16)

Using the Lagrangian method, the centres and membership degrees are calculated as follows.

$$ {V}_j=\frac{\sum \limits_{k=1}^N{u_{kj}}^mK\left({X}_k,{V}_j\right)\;H\left({X}_k,{V}_j\right){X}_k}{\sum \limits_{k=1}^N{u_{kj}}^m\;K\left({X}_k,{V}_j\right)\;H\left({X}_k,{V}_j\right)};j=1,2,\cdots C $$
(17)
$$ {u}_{kj}=\frac{1}{\sum \limits_{i=1}^C{\left(\frac{1-K\left({X}_k,{V}_j\right)\;H\left({X}_k,{V}_j\right)}{1-K\left({X}_k,{V}_i\right)\;H\left({X}_k,{V}_i\right)}\right)}^{\frac{2}{m-1}}} $$
(18)

To minimize \( {F}_m^{HTGk}\left(U,V\right) \) in Eq. (15), the centres and memberships are updated by Eqs. (17) and (18), where the kernel function K [31] is chosen as a Gaussian function with K(Xk, Vj) given by

$$ K\left({X}_k,{V}_j\right)=\varphi \left({X}_k\right)\cdot \varphi \left({V}_j\right)=\exp \left(-{\left\Vert {X}_k-{V}_j\right\Vert}^2/{\sigma}^2\right) $$
(19)

Here φ(⋅) is the non-linear mapping into the high-dimensional feature space and σ2 is a user-defined parameter. Varying σ2 changes the behaviour of the segmentation algorithm, so a method is needed to fix a suitable value for σ2. The Gaussian function smooths the image and thus reduces the influence of noise. The hyperbolic tangent function gently controls the grey levels on either side of an edge centre, narrowing the edge width and thereby reinforcing the edge.

The hyperbolic tangent function [5] used is mentioned below.

$$ H\left({X}_k,{V}_j\right)=1-\tanh \kern0.24em \left(\frac{-{\left\Vert {X}_k-{V}_j\right\Vert}^2}{\sigma^2}\right) $$
(20)

Here the value of σ2 is computed from the variation of the P adjacent neighbours within a radius R of the centre pixel Xk:

$$ {\sigma}^2=\frac{\sum \limits_{k=1}^P\left\Vert {X}_k-\overline{X}\right\Vert }{P} $$
(21)

where σ2 denotes the image variation and \( \overline{X}=\frac{\sum \limits_{K=1}^P{X}_k}{P} \) is the mean of the neighbouring pixels. The kFCM algorithm maps the input data space into a high-dimensional feature space. In image segmentation it is established that the HTGkFCM algorithm performs better than the FCM algorithm, with increased robustness and efficiency. Unlike FCM, it can adaptively determine the number of clusters in the data under some criteria.
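A sketch of the HTGkFCM updates of Eqs. (17)-(21) is given below. It assumes the initial centres V0 come from a preceding FCM run, as in the proposed pipeline, and for simplicity estimates a single σ² from the whole data set rather than from the P neighbours of each centre pixel as in Eq. (21); the function name `htgk_fcm` is illustrative only.

```python
import numpy as np

def htgk_fcm(X, V0, m=2.0, eps=2e-3, max_iter=100):
    """Sketch of the HTGkFCM updates of Eqs. (17)-(21).

    X : (N, r) data, V0 : (C, r) initial centres taken from FCM,
    m : fuzzy exponent.  sigma^2 is estimated once from all data points
    instead of the per-pixel neighbourhood of Eq. (21).
    """
    X = np.asarray(X, dtype=float)
    V = np.asarray(V0, dtype=float).copy()
    sigma2 = max(float(np.mean(np.linalg.norm(X - X.mean(axis=0), axis=1))), 1e-12)

    for _ in range(max_iter):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)   # ||X_k - V_j||^2
        K = np.exp(-d2 / sigma2)                                   # Gaussian kernel, Eq. (19)
        H = 1.0 - np.tanh(-d2 / sigma2)                            # hyperbolic tangent term, Eq. (20)
        KH = K * H
        # membership update, Eq. (18)
        one_minus = np.fmax(1.0 - KH, 1e-12)
        inv = one_minus ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
        # centre update, Eq. (17)
        W = (U ** m) * KH
        V_new = (W.T @ X) / np.fmax(W.sum(axis=0), 1e-12)[:, None]
        if np.abs(V_new - V).max() < eps:
            V = V_new
            break
        V = V_new
    return U, V
```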

The flow chart in Fig. 2 explains the entire process flow of the proposed work. In this method the segmentation of DXRI based on fuzzy clustering is the core idea. The background area and the main part of a DXRI are separated by Otsu thresholding. Using the result of this step, the FCM method is applied to cluster the dental structure domain, which separates the dental structure area. The result is then improved, starting from the optimum result of the former clustering step, by the robust semi-supervised Hyperbolic Tangent Gaussian Kernel Fuzzy Clustering algorithm (HTGkFCM), which is less sensitive to noise. From this prototype the final segmentation result is computed in a sensible processing manner. The incorporation of Fuzzy C-Means (FCM), the Otsu method and the HTGkFCM process is used to overcome the limitations of the independent approaches.

Fig. 2

Flow chart for semi-supervised Otsu based HTGkFCM
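Putting the stages of Fig. 2 together, the sketch below wires up the hypothetical helper functions sketched in Sections 3.1-3.4 (`guided_filter`, `split_regions`, `fcm`, `htgk_fcm`) into one pipeline. It assumes an 8-bit grayscale radiograph and self-guided filtering for de-noising; it is an illustration of the flow, not the exact Matlab implementation evaluated in Section 4.

```python
import numpy as np

def segment_dental_xray(img, n_clusters=3):
    """End-to-end sketch of the pipeline in Fig. 2:
    guided-filter de-noising -> Otsu background removal -> FCM ->
    HTGkFCM refinement.  `img` is an 8-bit grayscale array."""
    gray = img.astype(float)
    # 1. pre-processing (Section 3.1): self-guided filtering
    denoised = guided_filter(gray / 255.0, gray / 255.0, r=1, eps=1e-3) * 255.0
    # 2. Otsu step (Section 3.2): keep only the main (tooth/bone) part
    main = split_regions(denoised)
    X = denoised[main].reshape(-1, 1)
    # 3. FCM (Section 3.3) gives initial memberships and centres
    U0, V0 = fcm(X, C=n_clusters, m=2.0, eps=2e-3)
    # 4. HTGkFCM (Section 3.4) refines the FCM centres in kernel space
    U, _ = htgk_fcm(X, V0, m=2.0, eps=2e-3)
    # map each main-part pixel to its highest-membership cluster
    labels = np.zeros(gray.shape, dtype=int)            # 0 = background
    labels[main] = U.argmax(axis=1) + 1                 # 1..n_clusters
    return labels
```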

4 Results and discussion

The proposed HTGkFCM-Otsu technique was implemented in Matlab 2014a and its performance is evaluated against the Otsu and FCM methods. The performance of the proposed method is also compared with eSFCM [29], which uses a different semi-supervised fuzzy clustering algorithm. An original dental X-ray image dataset from the IIT Roorkee hospital containing 152 images is used for our experiments. The parameters used in the algorithm are m = 2 and threshold ε = 0.002. The experimental results of the proposed dental image segmentation are illustrated in Figs. 3, 4, 5 and 6.

Fig. 3

Dental X-ray image experimental results. a Original image; b Image after applying Otsu

Fig. 4

Experimental results with three clusters on the Otsu-applied image. a FCM clustering results; b clustering by HTGkFCM-Otsu

Fig. 5

Experimental results with five clusters on the Otsu-applied image. a results of clustering by FCM; b clustering by HTGkFCM-Otsu

Fig. 6

Experimental results with seven clusters on the Otsu-applied image. a FCM clustering results; b clustering by HTGkFCM-Otsu

For the dental analysis, seven structures of the tooth were involved in the radiographs: caries, enamel, dentin, pulp, crown, cementum and root canal. The dental substance is made of soft tissue (pulp) and hard tissues (enamel, dentin and cementum). Enamel forms the outer surface of the anatomic crown. Dentin forms the main portion of the tooth, in which the root is the part covered in cementum and the crown is the part covered in enamel. Cementum covers the root, and there are various junctions such as the dentino-enamel junction, the cemento-enamel junction and the cemento-dentinal junction. The dental pulp lies in the cavity at the centre of the tooth; dental caries is also known as tooth decay. Figure 7 below gives the segmentation results for five images. Thus, the result of dental image segmentation is illustrated based on the seven tooth structures.

Fig. 7

Experimental segmentation results for the proposed HTGkFCM-Otsu method with clusters c = 3, c = 5 and c = 7. (a)-(c) represent the segmentation results for image 1, (d)-(f) for image 2, (g)-(i) for image 3, (j)-(l) for image 4, (m)-(o) for image 5

Compared with the existing methodologies, our proposed work has the following advantages:

  a)

    The proposed work is an initial effort towards modelling DXRI segmentation with semi-supervised fuzzy clustering. The new hyperbolic tangent function in Eq. (20) combines the neighbourhood information and the dental features of a pixel, so the resulting semi-supervised fuzzy clustering model, comprising the membership matrix and the cluster centres, is driven by the structure of the dental X-ray image. Obtaining segmented images that are closest to the optimum outcome is of particular significance for practical dentistry.

  b)

    Additional information is introduced into the membership matrix of the FCM algorithm, which is combined with threshold-based segmentation. Compared to the traditional FCM, our HTGkFCM-Otsu gives a proper way of specifying this additional information and of incorporating it into the objective function of the model.

  c)

    This study first formulates the segmentation outcome as an optimization problem under the fuzzy concept. Distinct from traditional approaches using standard FCM, the current work identifies the remaining difficulties and resolves them within the same framework. The effectiveness of the proposed work is validated, and the quality of clustering with the hyperbolic tangent function is better than that of the traditional FCM.

  d)

    This novel algorithm is supported by theoretical studies. Most of the algorithms and suggestions need to be explained, and an evaluation is given below to show the performance of the proposed work.

The cluster validity measures computed are the Davies-Bouldin index (DB), Segmentation Accuracy (SA), the Simplified Silhouette Width Criterion (SSWC) and the Mean Absolute Error (MAE). The proposed HTGkFCM-Otsu algorithm is expected to achieve higher accuracy and to be more reliable than the existing clustering approaches.

Segmentation Accuracy (SA): the ratio of correctly segmented pixels to the total number of pixels. A higher SA value indicates better clustering performance.

Mean Absolute Error (MAE): the MAE measures how close the segmented result is to the reference segmentation; for an image segmentation it is given by,

$$ MAE=\frac{1}{n}\sum \limits_{i=1}^n\left|{f}_i-{y}_i\right|=\frac{1}{n}\sum \limits_{i=1}^n\left|{e}_i\right| $$
(22)

where fi is the segmented result and yi is the reference (correct) value.
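Both measures are straightforward to compute once a reference segmentation is available; a minimal sketch, assuming `pred` and `truth` are label images of the same shape, is shown below.

```python
import numpy as np

def segmentation_accuracy(pred, truth):
    """SA: fraction of pixels whose label matches the reference segmentation."""
    return float(np.mean(pred == truth))

def mean_absolute_error(pred, truth):
    """MAE of Eq. (22): average absolute difference between the
    segmented result f_i and the reference value y_i."""
    return float(np.mean(np.abs(pred.astype(float) - truth.astype(float))))
```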

Davies-Bouldin: The Davies-Bouldin index [11] is based on a cluster separation measure and a cluster variation measure, and relies on a similarity measure between clusters. For each cluster, the index computes the mean similarity with its most similar cluster. A lower Davies-Bouldin index indicates a better cluster configuration, in which clusters are compact and well separated.

$$ DB=\frac{1}{C}\sum \limits_{j=1}^C{D}_j $$
(23)

Where \( {D}_j=\underset{i=1,\cdots, C,\ i\ne j}{\max}\left({D}_{ij}\right) \) for \( j=1,\cdots, C \) and \( {D}_{ij}=\frac{\left({\overline{d}}_j+{\overline{d}}_i\right)}{d_{i,j}} \).

Here \( {\overline{d}}_i \) and \( {\overline{d}}_j \) are the mean within-cluster distances of clusters i and j, and di, j is the distance between the two clusters.

$$ {\overline{d}}_j=\frac{1}{N}\sum \limits_{x_i\in {C}_j}\left\Vert {x}_i-{\overline{x}}_j\right\Vert; {d}_{i,j}=\left\Vert {\overline{x}}_i-{\overline{x}}_j\right\Vert $$
(24)

A low value of the DB criterion is optimal.

Simplified Silhouette Width Criterion (SSWC): The Silhouette Width Criterion (SWC) [17, 26] is another well-known index for the compactness and separation of clusters, based on geometrical considerations. For an individual object, the silhouette is defined as:

$$ {S}_{x\;k}=\frac{b_{p,k}-{a}_{p,k}}{\max \left\{{a}_{p,k},{b}_{p,k}\right\}} $$
(25)

This yields the SSWC, defined as the mean of Sxk over k = 1, 2, ⋯, N:

$$ SSWC=\left(\frac{1}{N}\sum \limits_{k=1}^N{S}_{xk}\right) $$
(26)

Here ap, k is the distance of object k to its own cluster p and bp, k is its distance to the closest other cluster; this prevents the trivial solution with k = N (each object of the dataset forming its own cluster) from being elected as the best one. Better partitions produced by an effective algorithm are expected to be distinguished by higher values of SWC when using SSWC.

PBM: The best partitioning in the hierarchy is indicated by the highest value of this index, called the PBM index [22]. PBM depends on the distances within and between clusters, which are calculated using the following formula.

$$ PBM={\left(\frac{1}{K}\frac{E_1}{E_k}{D}_k\right)}^2 $$
(27)

Where \( {E}_1=\sum \limits_{i=1}^N\left\Vert {x}_i-\overline{x}\right\Vert \), \( {E}_K=\sum \limits_{k=1}^C\sum \limits_{x_i\in {C}_k}\left\Vert {x}_i-{\overline{x}}_k\right\Vert \) and \( {D}_K=\underset{i,j}{\max }\left\Vert {\overline{x}}_i-{\overline{x}}_j\right\Vert \).

Here E1 is the sum of distances between the objects and the grand mean of the data, EK is the sum of within-group distances and DK is the maximum distance between group centroids. The best partition is indicated when PBM is maximized, which implies that EK is minimized while DK is maximized.
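For reference, the sketch below computes the DB, SSWC and PBM indices of Eqs. (23)-(27) from a hard labelling (e.g. the arg-max of the membership matrix). It assumes at least two non-empty clusters, and the within-cluster scatter of the DB index is averaged over each cluster's own members; the helper names are illustrative only.

```python
import numpy as np

def _centroids(X, labels, K):
    return np.array([X[labels == j].mean(axis=0) for j in range(K)])

def davies_bouldin(X, labels, K):
    """DB index of Eqs. (23)-(24); lower is better."""
    V = _centroids(X, labels, K)
    scatter = np.array([np.linalg.norm(X[labels == j] - V[j], axis=1).mean()
                        for j in range(K)])
    db = 0.0
    for j in range(K):
        ratios = [(scatter[j] + scatter[i]) / np.linalg.norm(V[i] - V[j])
                  for i in range(K) if i != j]
        db += max(ratios)
    return db / K

def simplified_silhouette(X, labels, K):
    """SSWC of Eqs. (25)-(26); higher is better.  a is the distance of an
    object to its own centroid, b the distance to the nearest other centroid."""
    V = _centroids(X, labels, K)
    d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)   # (N, K)
    a = d[np.arange(len(X)), labels]
    d_other = d.copy()
    d_other[np.arange(len(X)), labels] = np.inf
    b = d_other.min(axis=1)
    return float(np.mean((b - a) / np.maximum(a, b)))

def pbm_index(X, labels, K):
    """PBM index of Eq. (27); higher is better."""
    V = _centroids(X, labels, K)
    e1 = np.linalg.norm(X - X.mean(axis=0), axis=1).sum()
    ek = sum(np.linalg.norm(X[labels == j] - V[j], axis=1).sum() for j in range(K))
    dk = max(np.linalg.norm(V[i] - V[j]) for i in range(K) for j in range(K) if i != j)
    return ((1.0 / K) * (e1 / ek) * dk) ** 2
```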

The best values are computed from the above validity indices. The experimental results for the five benchmark images from the dataset (Image 1-Image 5) are summarized in Table 3. Compared with the other algorithms, the accuracy of the proposed framework is improved over that of eSFCM-Otsu. The table clearly shows that HTGkFCM-Otsu attained the larger number of validation indices (SSWC and PBM) on which its results improve upon the existing algorithms. With proper adaptation, the collective framework HTGkFCM-Otsu is an effective tool for DXRI segmentation.

Table 3 The comparative results for the semi-supervised fuzzy c-means clustering

HTGkFCM-Otsu derives additional details from the optimum results of both the hyperbolic tangent function and FCM.

  1.

    The proposed work has improved values compared with the semi-supervised and fuzzy clustering algorithms, namely eSFCM and FCM respectively. Rather than establishing the additional information arbitrarily as in the eSFCM algorithm, HTGkFCM-Otsu acquires it from the optimal results of FCM, modified by spatial constraints, so that the subsequent iterations can converge to more exact results for a given problem.

  2.

    HTGkFCM-Otsu does not inflate the amount of data to be processed. Its parameters are similar to those of Otsu, and it does not require excessive processing time. Although the concept is a combination of several related concepts, it runs efficiently for the set of images, so the processing time of the proposed algorithm is reliable. The algorithm is considerably more efficient than FCM, although the improvement over FCM is not dramatic.

  3.

    HTGkFCM-Otsu is reliable and straightforward to implement.

  4.

    HTGkFCM-Otsu provides an important way to incorporate many more parameters into problems related to medical diagnosis, which is important for the growth of health-care systems in dental science.

4.1 Non-background dataset based experimental results

Table 3 shows the outcome of each algorithm with different parameter values on the real data. The outcomes are assessed through the validity measures; as the best values in Table 3 show, the performance in terms of SA, PBM, SSWC and MAE is better than that of the existing algorithms. As a summary, Table 4 presents the mean and variance of each measure.

Table 4 Mean and variance of each criterion

Table 5 provides a performance comparison of all algorithms in terms of quantitative measures.

Table 5 Quantitative measure based performance

Table 5 is derived from Table 4 by evaluating the optimal value and assigning the corresponding value from Table 3. The other values in the same row are obtained by dividing the corresponding values in Table 4 by the best value, in order to see whether the presented optimal algorithm is more reliable than the other algorithms. From Table 5 we can see that the larger number of validation indices (SSWC, PBM and DB) were attained by eSFCM-Otsu, which is a reliable algorithm; the other comparative methods (eSFCM-Otsu and FCM) attain maximum results on some indices but are unstable when analysed with the other indices. This clearly confirms that the collaborative framework HTGkFCM-Otsu, with suitable processing, is an applicable tool for the segmentation of DXRI.

4.2 State of the art comparison

The segmentation results of the proposed method are shown in Fig. 8 and compared with other methods on dental images of different patients. From this state-of-the-art comparison, we can notice that the segmentation results of the proposed HTGkFCM-Otsu are better than those of the other techniques. Figure 8(a) shows that the Davies-Bouldin (DB) index has a lower value (0.625) than the other techniques such as FCM, Otsu, eSFCM and CANOM (Clustering Algorithm based on Neutrosophic Orthogonal Matrices) [3], so the proposed work has better performance. In Fig. 8(b), the Simplified Silhouette Width Criterion (SSWC) has a higher value (0.982) than the other techniques, hence the proposed system again performs better. Finally, for the processing time in Fig. 8(c), the proposed method consumes less time than the other methods, processing in 8.523 s, while FCM, Otsu, eSFCM and CANOM take 12.883 s, 10.132 s, 13.190 s and 13.771 s respectively. Compared with the existing algorithms, the results are found to be better because HTGkFCM-Otsu attained the larger number of validation indices (SSWC and DB), although the methods are volatile when tested with other indices.

Fig. 8

Box plot comparison for (a) Davies & Bouldin, (b) SSWC and (c) Time

5 Conclusion

In this article, we concentrated on DXRI segmentation with fuzzy clustering as the primary approach. The contribution of this work is a novel semi-supervised collective framework that combines Otsu thresholding with the Hyperbolic Tangent Gaussian kernel Fuzzy C-Means algorithm (HTGkFCM). In the FCM step, the main part of the dental image is classified into teeth and dental areas. The result obtained from FCM is then refined by HTGkFCM. The results show that the semi-supervised fuzzy clustering algorithm was able to produce the final segmentation from the prototype in a reasonable processing manner. Further work on this research will proceed as follows: (i) the Gaussian kernel cannot capture the entire structure of the data, so the function used to determine the cluster centres should be upgraded beyond the Gaussian kernel function; (ii) in simulation it is observed that the data segmentation approach for enamel, dentin and pulp creates logical additional training samples, but the new classes crown, cementum and root canal appear somewhat dissimilar with respect to their relative location, so the segmentation needs to be improved; and (iii) we will concentrate on computer-automated detection of caries in bitewing radiography.