1 Introduction

Many mathematical tools have been proposed to overcome problems containing uncertainties in the real world. Fuzzy sets (Zadeh 1965) and soft sets (Molodtsov 1999) are among the known mathematical tools. In addition to these, intuitionistic fuzzy sets (Atanassov 1986) and interval-valued intuitionistic fuzzy sets (ivif-sets) (Atanassov 2020; Atanassov and Gargov 1989), being the generalisations of the concept of fuzzy sets, have been propounded. Afterwards, various hybrid versions of these concepts, such as fuzzy soft sets (Maji et al. 2001), fuzzy parameterized soft sets (Çağman et al. 2011a), fuzzy parameterized fuzzy soft sets (Çağman et al. 2010), intuitionistic fuzzy parameterized soft sets (Deli and Çağman 2015), interval-valued intuitionistic fuzzy parameterized soft sets (Deli and Karataş 2016), intuitionistic fuzzy parameterized intuitionistic fuzzy soft sets (Karaaslan 2016), and fuzzy parameterized intuitionistic fuzzy soft sets (Sulukan et al. 2019) have been introduced. So far, the researchers have conducted numerous theoretical and applied studies on these concepts in various fields, such as algebra (Çıtak and Çağman 2015; Senapati and Shum 2019; Sezgin 2016; Sezgin et al. 2019; Ullah et al. 2018), topology (Atmaca 2017; Aydın and Enginoğlu 2021b; Enginoğlu et al. 2015; Riaz and Hashmi 2017; Şenel 2016; Thomas and John 2016), analysis (Molodtsov 2004; Riaz et al. 2018; Şenel 2018), and decision making (Çağman and Enginoğlu 2010b; Çağman et al. 2011b; Garg and Arora 2020; Kumar and Garg 2018; Liu and Jiang 2020; Maji et al. 2002; Memiş and Enginoğlu 2019; Mishra and Rani 2018; Petchimuthu et al. 2020; Xue et al. 2021).

However, when a problem containing uncertainties incorporates a large number of data, the aforesaid set concepts display some time- and complexity-related disadvantages. To cope with these difficulties, Çağman and Enginoğlu (2010a) have defined the concept of soft matrices allowing data in such problems to be transferred to and processed in a computer environment and suggested the soft max-min method. Then, Çağman and Enginoğlu (2012) have presented the concept of fuzzy soft matrices and constructed a soft decision-making (SDM) method. Enginoğlu and Çağman (2020) have propounded the concept of fuzzy parameterized fuzzy soft matrices (fpfs-matrices). Moreover, they have proposed an SDM method called Prevalence Effect Method (PEM) and applied it to a performance-based value assignment (PVA) problem, so that they can order image-denoising filters in terms of noise-removal performance. Afterwards, Enginoğlu et al. (2019a) have offered a novel SDM method constructed with fpfs-matrices and PEM, and applied it to the problem of monolithic columns classification.

Lately, the concept of fpfs-matrices has stood out among others due to its modelling success in decision-making problems where the alternatives and parameters have fuzzy membership degrees. Therefore, many SDM methods constructed with its substructures have been configured in (Aydın and Enginoğlu 2019, 2020; Enginoğlu and Memiş 2018b; Enginoğlu and Öngel 2020; Enginoğlu et al. 2021a, b) to operate in fpfs-matrices space, faithfully to the originals. Some of the configured methods have been applied to PVA problems, and successful results have been obtained (Aydın and Enginoğlu 2019, 2020; Enginoğlu and Öngel 2020). Besides, Enginoğlu and Memiş (2018a, 2018c) and Enginoğlu et al. (2018a, 2018b) have focussed on mathematical simplifications and improvements of some of the configured methods. Memiş et al. (2019) have developed a classification algorithm based on the normalised Hamming pseudo-similarity of fpfs-matrices. Further, Memiş et al. (2021b) have proposed a classification algorithm based on the Euclidean pseudo-similarity of fpfs-matrices.

Afterwards, the concept of intuitionistic fuzzy parameterized intuitionistic fuzzy soft matrices (ifpifs-matrices) (Enginoğlu and Arslan 2020) has been introduced to model uncertainties in which the alternatives and parameters have intuitionistic fuzzy values. Furthermore, using this concept, a new SDM method has been proposed and applied to a hypothetical problem concerning the determination of eligible candidates in a recruitment scenario and a real-life problem of image processing. Arslan et al. (2021) have then generalised 24 SDM methods operating in fpfs-matrices space via this concept. Besides, they have suggested five test scenarios to compare the performances of the generalised SDM methods and applied the SDM methods successful in these test scenarios to a PVA problem. In addition, Memiş et al. (2021a) have offered a classifier based on the similarity of ifpifs-matrices and applied this classifier to machine learning.

Recently, to be able to model some problems mathematically in which parameters and alternatives contain serious uncertainties, Aydın and Enginoğlu (2021a) have defined the concept of interval-valued intuitionistic fuzzy parameterized interval-valued intuitionistic fuzzy soft sets (d-sets), which can be regarded as the general form of the concepts of interval-valued intuitionistic fuzzy parameterized soft sets (Deli and Karataş 2016) and interval-valued intuitionistic fuzzy soft sets (Jiang et al. 2010; Min 2008). They then have proposed an SDM method using d-sets and applied it to two decision-making problems concerning the eligibility of candidates for two vacant positions in an online job advertisement and PVA to the known filters used in image denoising. The applications have shown that d-sets can be successfully applied to problems containing further uncertainties. Thus, in decision-making problems where the parameters and alternatives contain multiple measurement results, the ambiguity as to which value to assign to a parameter or an alternative has been clarified. The primary motivation of the present study is to develop effective SDM methods by improving d-sets’ skills in modelling such problems. The second one is to propound a novel mathematical tool to enable data in similar problems, containing both a large number of data and multiple intuitionistic fuzzy measurement results, to be transferred to a computer environment. Thus, it will be possible to use the concept of d-sets effectively.

In the current study, we focus on the concept of ivif-sets, more meaningful and convenient than the others, to minimise data loss when modelling the problem of which value to assign to a parameter or an alternative with multiple fuzzy or intuitionistic fuzzy measurement results. For example, in Section 5, the results of Based on Pixel Density Filter (BPDF) (Erkan and Gökrem 2018) for 20 traditional test images at noise density \(10\%\) are as follows:

$$\begin{aligned} \begin{array}{lllll} \mu _1=0.9848, &{} \mu _2=0.9911, &{} \mu _3=0.9743, &{} \mu _4=0.9795, &{} \mu _5=0.9735, \\ \mu _6=0.9747, &{} \mu _7=0.9795, &{} \mu _8=0.9885, &{} \mu _9=0.9761, &{} \mu _{10}=0.9801, \\ \mu _{11}=0.9753, &{} \mu _{12}=0.9938, &{} \mu _{13}=0.9705, &{} \mu _{14}=0.9707, &{} \mu _{15}=0.9726, \\ \mu _{16}=0.9808, &{} \mu _{17}=0.9791, &{} \mu _{18}=0.9909, &{} \mu _{19}=0.9657, &{} \mu _{20}=0.9830 \end{array} \end{aligned}$$

We can regard these results as the multiple membership degrees of BPDF herein. Thus, we can obtain the multiple non-membership degrees of BPDF corresponding to these multiple membership degrees using \(\nu _i=1-\mu _i\), for \(i \in \{1,2,\dots ,20\}\). Namely,

$$\begin{aligned} \begin{array}{lllll} \nu _1= 0.0152, &{} \nu _2=0.0089, &{} \nu _3=0.0257, &{} \nu _4=0.0205, &{} \nu _5=0.0265, \\ \nu _6=0.0253, &{} \nu _7=0.0205, &{} \nu _8=0.0115, &{} \nu _9=0.0239, &{} \nu _{10}=0.0199,\\ \nu _{11}=0.0247, &{} \nu _{12}=0.0062, &{} \nu _{13}=0.0295, &{} \nu _{14}=0.0293, &{} \nu _{15}=0.0274,\\ \nu _{16}=0.0192, &{} \nu _{17}=0.0209, &{} \nu _{18}=0.0091, &{} \nu _{19}=0.0343, &{} \nu _{20}=0.0170 \end{array} \end{aligned}$$

We can calculate the membership and non-membership degrees of BPDF in three different ways by making use of the aforesaid values, as follows:

  1.

    Using \(\mu \)(BPDF)\( = \frac{1}{20} \sum _{i=1}^{20} \mu _i\), we obtain the degree of BPDF’s membership to a fuzzy set as \(\mu \)(BPDF)\(=0.9792\).

  2.

    By utilising \(\mu \)(BPDF)\(=\min \limits _{i \in I_{20}} {\mu _i}\) and \(\nu \)(BPDF)\(=1-\max \limits _{i \in I_{20}} {\mu _i}\), where \(I_{20}:=\{1,2,\dots ,20\}\), we obtain the degrees of BPDF’s membership and non-membership to an intuitionistic fuzzy set as \(\mu \)(BPDF)\(=0.9657\) and \(\nu \)(BPDF)\(=0.0062\), respectively.

  3.

    By employing \(\mu \)(BPDF)\(=\left[ \frac{\min \limits _{i \in I_{20}}{\mu _{i}}}{\max \limits _{i \in I_{20}}{\mu _{i}} + \max \limits _{i \in I_{20}}{\nu _{i}}} , \frac{\max \limits _{i \in I_{20}}{\mu _{i}}}{\max \limits _{i \in I_{20}}{\mu _{i}} + \max \limits _{i \in I_{20}}{\nu _{i}}} \right] \) and \(\nu \)(BPDF)\(=\left[ \frac{\min \limits _{i \in I_{20}}{\nu _{i}}}{\max \limits _{i \in I_{20}}{\mu _{i}} + \max \limits _{i \in I_{20}}{\nu _{i}}} , \frac{\max \limits _{i \in I_{20}}{\nu _{i}}}{\max \limits _{i \in I_{20}}{\mu _{i}} + \max \limits _{i \in I_{20}}{\nu _{i}}} \right] \), we obtain the degrees of BPDF’s membership and non-membership to an ivif-set as \(\mu \)(BPDF)\(=[0.9392,0.9666]\) and \(\nu \)(BPDF)\(=[0.0060,0.0334]\), respectively.
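For reproducibility, the three conversions above can be sketched in Python; this is an illustrative sketch, and the last digit may differ slightly from the values reported here because the raw measurement results are rounded to four decimal places:

```python
# Multiple membership degrees of BPDF at noise density 10% (from the text)
mu = [0.9848, 0.9911, 0.9743, 0.9795, 0.9735, 0.9747, 0.9795, 0.9885,
      0.9761, 0.9801, 0.9753, 0.9938, 0.9705, 0.9707, 0.9726, 0.9808,
      0.9791, 0.9909, 0.9657, 0.9830]
nu = [round(1 - m, 4) for m in mu]   # corresponding non-membership degrees

# 1. Fuzzy set: arithmetic mean of the membership degrees
mu_fuzzy = sum(mu) / len(mu)         # approximately 0.9792

# 2. Intuitionistic fuzzy set: minimal membership, minimal non-membership
mu_ifs = min(mu)                     # 0.9657
nu_ifs = 1 - max(mu)                 # approximately 0.0062

# 3. ivif-set: intervals normalised by max(mu) + max(nu)
d = max(mu) + max(nu)
mu_ivif = [min(mu) / d, max(mu) / d] # approximately [0.9393, 0.9666]
nu_ivif = [min(nu) / d, max(nu) / d] # approximately [0.0060, 0.0334]
```
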

The first case shows that BPDF’s noise-removal performance at noise density \(10\%\) accounts for approximately \(98\%\). The second signifies that BPDF exhibits a success rate of around \(97\%\) and a failure rate of about \(1\%\) in noise removal. The last one indicates that the noise-removal success of BPDF ranges from \(94\%\) to \(97\%\) and its failure rate from \(1\%\) to \(3\%\). These comments manifest that the membership and non-membership degrees assigned to an alternative in ivif-sets offer more information than fuzzy sets and intuitionistic fuzzy sets do. Hence, we can summarise the significant advantages and contributions of the present study as follows:

  • The concept of interval-valued intuitionistic fuzzy parameterized interval-valued intuitionistic fuzzy soft matrices (d-matrices) has an important advantage in preventing errors arising from manual calculations in SDM methods constructed with d-sets. This concept makes it possible to obtain fast and reliable results.

  • The concept of d-matrices allows a large number of data and multiple measurement results to be processed by transferring them to a computer environment.

  • The concept of d-matrices utilises ivif-values containing more information compared to fuzzy or intuitionistic fuzzy values to determine membership and non-membership degrees of parameters and alternatives.

  • The pre-processing step of the configured method presents an approach related to the conversion of multiple intuitionistic fuzzy measurement results to ivif-values.

On the other hand, the running time of the configured method can be slightly longer than those of the others. This relatively minor drawback results from the computations performed while converting multiple intuitionistic fuzzy measurement results to ivif-values. For instance, for the d-matrix \([b_{ij}]\) and the ifpifs-matrix \([c_{ij}]\) in Sects. 5 and 6, the average running times of the methods (in seconds), using MATLAB R2021a and a laptop with a 2.5 GHz i5-2450M CPU and 8 GB RAM, over 1000 runs are as follows:

The configured method: 0.0063, iMBR01: 0.0011, iMBR02\((I_9)\): 0.0009, iCCE10: 0.0002, iCCE11: 0.0004, and iPEM: 0.0028

Section 2 of the present study provides some of the basic definitions to be employed in the paper’s next sections. Section 3 defines the concept of d-matrices and investigates some of its basic properties. Section 4 configures a state-of-the-art SDM method constructed with d-sets to operate it in d-matrices space. Section 5 applies it to a real-life problem concerning PVA to the known image-denoising filters using the Structural Similarity (SSIM) results of these filters for the images provided in two different databases. Furthermore, the section comments on the ranking orders of the filters. Section 6 provides a comparative analysis of the ranking performances of the configured method and five state-of-the-art SDM methods constructed with ifpifs-matrices by applying them to the same problem. Finally, d-matrices are discussed for further research. This study is a part of the first author’s PhD dissertation (Aydın 2020).

2 Preliminaries

This section first presents several known definitions and propositions. Throughout this paper, let Int([0, 1]) be the set of all closed classical subintervals of [0, 1].

Definition 1

Let \(\gamma _1,\gamma _2 \in Int([0,1])\). For \(\gamma _1:=[\gamma ^-_1,\gamma ^+_1]\) and \(\gamma _2:=[\gamma ^-_2,\gamma ^+_2]\),

i.:

if \(\gamma ^-_2 \le \gamma ^-_1\) and \(\gamma ^+_1 \le \gamma ^+_2\), then \(\gamma _1\) is called a classical subinterval of \(\gamma _2\) and is denoted by \(\gamma _1 \subseteq \gamma _2\).

ii.:

if \(\gamma ^-_1 \le \gamma ^-_2\) and \(\gamma ^+_1 \le \gamma ^+_2\), then \(\gamma _1\) is called a subinterval of \(\gamma _2\) and is denoted by \(\gamma _1 {{\tilde{\subseteq }}} \gamma _2\).

iii.:

if \(\gamma ^-_1 = \gamma ^-_2\) and \(\gamma ^+_1 = \gamma ^+_2\), then \(\gamma _1\) and \(\gamma _2\) are called equal intervals, denoted by \(\gamma _1 = \gamma _2\).

Proposition 1

Let \(\gamma _1,\gamma _2 \in Int([0,1])\). Then, \(\gamma _1 {{\tilde{\le }}} \gamma _2\Leftrightarrow \gamma _1 {{\tilde{\subseteq }}} \gamma _2\). Here, “\({{\tilde{\le }}}\)” is a partial order relation over Int([0, 1]).

In the present paper, the least upper bound and the greatest lower bound of the elements of the set Int([0, 1]) are obtained with respect to the partial order relation “\({{\tilde{\le }}}\)”.

Definition 2

Let \(\gamma ,\gamma _1,\gamma _2 \in Int({\mathbb {R}})\) and \(c \in \mathbb {R}^{+}\) such that \(\gamma :=[\gamma ^-,\gamma ^+]\), \(\gamma _1:=[\gamma ^-_1,\gamma ^+_1]\), and \(\gamma _2:=[\gamma ^-_2,\gamma ^+_2]\). Then,

i.:

\(\gamma _1 + \gamma _2 := [\gamma ^-_1 + \gamma ^-_2,\gamma ^+_1 + \gamma ^+_2]\)

ii.:

\(\gamma _1 - \gamma _2 := [\gamma ^-_1 - \gamma ^+_2,\gamma ^+_1 - \gamma ^-_2]\)

iii.:

\(\gamma _1 \cdot \gamma _2 := [\min \{\gamma ^-_1\gamma ^-_2,\gamma ^-_1\gamma ^+_2,\gamma ^+_1\gamma ^-_2,\gamma ^+_1\gamma ^+_2\},\max \{\gamma ^-_1\gamma ^-_2,\gamma ^-_1\gamma ^+_2,\gamma ^+_1\gamma ^-_2,\gamma ^+_1\gamma ^+_2\}]\)

iv.:

\(c \cdot \gamma := [c \cdot \gamma ^-,c \cdot \gamma ^+]\)

Proposition 2

Let \(\gamma _1,\gamma _2 \in Int([0,1])\) such that \(\gamma _1:=[\gamma ^-_1,\gamma ^+_1]\) and \(\gamma _2:=[\gamma ^-_2,\gamma ^+_2]\). Then,

  i.

    \(\sup \{\gamma _1,\gamma _2\}=[\max \{\gamma ^-_1,\gamma ^-_2\},\max \{\gamma ^+_1,\gamma ^+_2\}]\)

  ii.

    \(\inf \{\gamma _1,\gamma _2\}=[\min \{\gamma ^-_1,\gamma ^-_2\},\min \{\gamma ^+_1,\gamma ^+_2\}]\)
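Definition 2 and Proposition 2 translate directly into code. The sketch below is illustrative only, with an interval represented as a (lower, upper) tuple; the function names are assumptions, not the authors' implementation:

```python
# Interval arithmetic of Definition 2 over (lower, upper) tuples.
def i_add(g1, g2):
    return (g1[0] + g2[0], g1[1] + g2[1])

def i_sub(g1, g2):
    # Lower end subtracts the other interval's upper end, and vice versa.
    return (g1[0] - g2[1], g1[1] - g2[0])

def i_mul(g1, g2):
    products = [g1[0] * g2[0], g1[0] * g2[1], g1[1] * g2[0], g1[1] * g2[1]]
    return (min(products), max(products))

def i_scale(c, g):
    # Scalar multiple with c > 0.
    return (c * g[0], c * g[1])

# Supremum and infimum of Proposition 2 (componentwise max and min).
def i_sup(g1, g2):
    return (max(g1[0], g2[0]), max(g1[1], g2[1]))

def i_inf(g1, g2):
    return (min(g1[0], g2[0]), min(g1[1], g2[1]))
```

For instance, `i_sup((0.1, 0.4), (0.2, 0.3))` returns `(0.2, 0.4)`, in line with Proposition 2 i.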

Second, this section presents some of the basic definitions to be used in the paper’s next sections.

Definition 3

(Atanassov and Gargov 1989) Let E be a universal set and \(\kappa \) be a function from E to \(Int([0,1]) \times Int([0,1])\). Then, the set \(\left\{ (x,\kappa (x)) : x \in E \right\} \), being the graphic of \(\kappa \), is called an interval-valued intuitionistic fuzzy set (ivif-set) over E.

Here, for all \(x \in E\), \(\kappa (x):=(\alpha (x),\beta (x))\), \(\alpha (x):=[\alpha ^-(x),\alpha ^+(x)]\), and \(\beta (x):=[\beta ^-(x),\beta ^+(x)]\) such that \(\alpha ^+(x)+\beta ^+(x) \le 1\). Moreover, \(\alpha \) and \(\beta \) are called membership function and non-membership function in an ivif-set, respectively.

From now on, the set of all the ivif-sets over E is denoted by IVIF(E). In IVIF(E), since the graph\((\kappa )\) and \(\kappa \) generate each other uniquely, the notations are interchangeable. Therefore, as long as it causes no confusion, we denote an ivif-set graph\((\kappa )\) by \(\kappa \). Moreover, we use the notation \({{^{\alpha (x)}_{\beta (x)}}}x\) instead of \((x,\alpha (x),\beta (x))\), for brevity. Thus, we represent an ivif-set over E with \(\kappa :=\left\{ {{^{\alpha (x)}_{\beta (x)}}}x : x \in E \right\} \) .

Note 1

Since \([k,k]:=k\), we use \({^{k}_{t}}x\) instead of \({{^{[k,k]}_{[t,t]}}}x\), for all \(k,t \in [0,1]\). Moreover, we do not display the elements \(^{0}_{1}x\) in an ivif-set.

Definition 4

(Aydın and Enginoğlu 2021a) Let U be a universal set, E be a parameter set, \(\kappa \in IVIF(E)\), and f be a function from \(\kappa \) to IVIF(U). Then, the set \(\left\{ \left( {{^{\alpha (x)}_{\beta (x)}}}x,f\left( {{^{\alpha (x)}_{\beta (x)}}}x\right) \right) : x\in E \right\} \), being the graphic of f, is called an interval-valued intuitionistic fuzzy parameterized interval-valued intuitionistic fuzzy soft set (d-set) parameterized via E over U (or briefly over U).

Note 2

We do not display the elements \(\left( ^{0}_{1}x,0_{U}\right) \) in a d-set. Here, \(0_{U}\) is the empty ivif-set over U.

Hereinafter, the set of all the d-sets over U is denoted by \(D_E(U)\). In \(D_E(U)\), since the graph(f) and f generate each other uniquely, the notations are interchangeable. Therefore, as long as it causes no confusion, we denote a d-set graph(f) by f.

Example 1

Let \(E=\{x_1,x_2,x_3,x_4\}\) be a parameter set and \(U=\{u_1,u_2,u_3,u_4,u_5\}\) be a universal set. Then,

$$\begin{aligned} \begin{array}{rl} f=&{}\left\{ \left( {^{[0.1,0.4]}_{[0.4,0.5]}}x_1,\left\{ {^{[0.4,0.6]}_{[0.2,0.3]}}u_1,{^{[0.7,0.8]}_{[0,0.1]}}u_2,{^{[0.1,0.4]}_{[0,0.2]}}u_4\right\} \right) , \left( {^{0}_{1}}x_2,\left\{ {^{[0,0.5]}_{[0.1,0.2]}}u_3,{^{[0.3,0.5]}_{[0.2,0.3]}}u_5\right\} \right) ,\right. \\ &{}\,\,\,\,\,\,\, \left. \left( {^{0}_{1}}x_3,1_{U}\right) ,\left( {^{[0.2,0.5]}_{[0.1,0.2]}}x_4,\left\{ {^{[0.3,0.4]}_{[0.5,0.6]}}u_2,{^{[0,0.2]}_{[0.5,0.6]}}u_4,{^{[0.3,0.7]}_{[0.1,0.2]}}u_5\right\} \right) \right\} \end{array} \end{aligned}$$

is a d-set over U. Here, \(1_{U}:=\left\{ {^{1}_{0}}u : u\in U\right\} \).

3 Interval-valued intuitionistic fuzzy parameterized interval-valued intuitionistic fuzzy soft matrices

This section first defines the concept of d-matrices and introduces some of its basic properties. The primary purpose of the present section is to enable a large number of data containing multiple measurement results to be transferred to a computer environment with the help of this concept. The second one is to develop effective SDM methods by improving d-sets’ skills in modelling such cases. To do so, this section focuses on making a theoretical contribution to the concept of soft matrices and defining product operations over d-matrices to use in SDM methods based on group decision making for the subsequent studies. From now on, let E be a parameter set and U be a universal set.

Definition 5

Let \(f \in D_E(U)\). Then, \([a_{ij}]\) is called the d-matrix of f and is defined by

$$\begin{aligned} {[}a_{ij}]=\left[ \begin{array}{cccccc} a_{01} &{} a_{02} &{} a_{03} &{} \dots &{} a_{0n} &{} \dots \\ a_{11} &{} a_{12} &{} a_{13} &{} \dots &{} a_{1n} &{} \dots \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots \\ a_{m1} &{} a_{m2} &{} a_{m3} &{} \dots &{} a_{mn} &{} \dots \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots &{} \ddots \\ \end{array}\right] \end{aligned}$$

such that for \(i \in \{0,1,2, \cdots \}\) and \(j \in \{1,2,\cdots \}\),

$$\begin{aligned} a_{ij}:=\left\{ \begin{array}{ll} {^{\alpha (x_j)}_{\beta (x_j)}}, &{} i=0 \\ f\left( {^{\alpha (x_j)}_{\beta (x_j)}}x_j\right) (u_i), &{} i\ne 0 \end{array} \right. \end{aligned}$$

Moreover, if \(|U|=m-1\) and \(|E|=n\), then \([a_{ij}]\) is an \(m \times n\) d-matrix. We represent the entry of a d-matrix \([a_{ij}]\) with \(a_{ij}:={^{\alpha _{ij}}_{\beta _{ij}}}\). It must be noted that for all i and j, \(\alpha _{ij}:=[\alpha ^-_{ij},\alpha ^+_{ij}]\) and \(\beta _{ij}:=[\beta ^-_{ij},\beta ^+_{ij}]\) such that \(\alpha ^+_{ij}+\beta ^+_{ij} \le 1\). In this paper, to avoid any confusion, as needed, the membership and non-membership degrees of \(a_{ij}\), i.e. \(\alpha _{ij}\) and \(\beta _{ij}\), will also be represented by \(\alpha ^a_{ij}\) and \(\beta ^a_{ij}\), respectively. Besides, the set of all the d-matrices parameterized via E over U is denoted by \(D_E[U]\) and \([a_{ij}], [b_{ij}], [c_{ij}] \in D_E[U]\).

The entries of a d-matrix \([a_{ij}]_{m \times n}\) consist of ivif-values. The entries of its zero-indexed row contain the membership and non-membership degrees of each parameter. For example, the entry \(a_{01}\) indicates the membership and non-membership degrees of the first parameter. Moreover, the entries of its other rows involve the membership and non-membership degrees of an alternative corresponding to each parameter. For instance, the entry \(a_{32}\) signifies the membership and non-membership degrees of the third alternative corresponding to the second parameter.

Example 2

The d-matrix of f provided in Example 1 is as follows:

$$\begin{aligned} {[}a_{ij}]=\left[ \begin{array}{cccc} ^{[0.1,0.4]}_{[0.4,0.5]} \,\,\,\, &{} ^{0}_{1} \,\,\,\, &{} ^{0}_{1} \,\,\,\, &{} ^{[0.2,0.5]}_{[0.1,0.2]} \\ ^{[0.4,0.6]}_{[0.2,0.3]} \,\,\,\, &{} ^{0}_{1} \,\,\,\, &{} ^{1}_{0} \,\,\,\, &{} ^{0}_{1} \\ ^{[0.7,0.8]}_{[0,0.1]} \,\,\,\, &{} ^{0}_{1} \,\,\,\, &{} ^{1}_{0} \,\,\,\, &{} ^{[0.3,0.4]}_{[0.5,0.6]} \\ ^{0}_{1} \,\,\,\, &{} ^{[0,0.5]}_{[0.1,0.2]} \,\,\,\, &{} ^{1}_{0} \,\,\,\, &{} ^ {0}_{1} \\ ^{[0.1,0.4]}_{[0,0.2]} \,\,\,\, &{} ^{0}_{1} \,\,\,\, &{} ^{1}_{0} \,\,\,\, &{} ^{[0,0.2]}_{[0.5,0.6]} \\ ^{0}_{1} \,\,\,\, &{} ^{[0.3,0.5]}_{[0.2,0.3]} \,\,\,\, &{} ^{1}_{0} \,\,\,\, &{} ^{[0.3,0.7]}_{[0.1,0.2]} \\ \end{array}\right] \end{aligned}$$
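For illustration, such a d-matrix can be stored in a computer environment as a nested list whose entries are (membership-interval, non-membership-interval) pairs. The sketch below is an assumed representation, not the authors' implementation, encoding the d-matrix above:

```python
# Each entry is ((alpha_minus, alpha_plus), (beta_minus, beta_plus)).
EMPTY = ((0.0, 0.0), (1.0, 1.0))   # the ivif-value 0/1
FULL = ((1.0, 1.0), (0.0, 0.0))    # the ivif-value 1/0

a = [
    [((0.1, 0.4), (0.4, 0.5)), EMPTY, EMPTY, ((0.2, 0.5), (0.1, 0.2))],  # row 0: parameters
    [((0.4, 0.6), (0.2, 0.3)), EMPTY, FULL, EMPTY],                      # u_1
    [((0.7, 0.8), (0.0, 0.1)), EMPTY, FULL, ((0.3, 0.4), (0.5, 0.6))],   # u_2
    [EMPTY, ((0.0, 0.5), (0.1, 0.2)), FULL, EMPTY],                      # u_3
    [((0.1, 0.4), (0.0, 0.2)), EMPTY, FULL, ((0.0, 0.2), (0.5, 0.6))],   # u_4
    [EMPTY, ((0.3, 0.5), (0.2, 0.3)), FULL, ((0.3, 0.7), (0.1, 0.2))],   # u_5
]

# a[0][0] carries the degrees of the first parameter x_1;
# a[3][1] carries the degrees of the third alternative u_3 for x_2.
```
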

Definition 6

Let \([a_{ij}] \in D_E[U]\). For all i and j, and for \(\lambda ,\varepsilon \in Int([0,1])\), if \(\alpha _{ij}=\lambda \) and \(\beta _{ij}=\varepsilon \), then \([a_{ij}]\) is called a \((\lambda ,\varepsilon )\)-d-matrix and is denoted by \([^{\lambda }_{\varepsilon }]\). Here, \(\left[ ^{0}_{1}\right] \) is called the empty d-matrix and \(\left[ ^{1}_{0}\right] \) the universal d-matrix.

Definition 7

Let \([a_{ij}],[b_{ij}],[c_{ij}] \in D_E[U]\), \(I_{E}:=\{j : x_j \in E\}\), and \(R\subseteq I_{E}\). If

$$\begin{aligned} \alpha _{ij}^{c}=\left\{ \begin{array}{rl}\alpha _{ij}^{a} ,&{} j\in R \\ \alpha _{ij}^{b},&{} j\in I_{E}\setminus R \end{array}\right. \quad \text {and } \quad \beta _{ij}^{c}=\left\{ \begin{array}{rl}\beta _{ij}^{a} ,&{} j\in R \\ \beta _{ij}^{b},&{} j\in I_{E}\setminus R \end{array}\right. \end{aligned}$$

then \([c_{ij}]\) is called the Rb-restriction of \([a_{ij}]\) and is denoted by \(\left[ (a_{Rb})_{ij}\right] \).

Briefly, if \([b_{ij}]=\left[ ^{0}_{1}\right] \), then \([{(a_{R})}_{ij}]\) can be used instead of \(\left[ {\left( a_{R^0_1}\right) }_{ij}\right] \) and is called the R-restriction of \([a_{ij}]\). It is clear that

$$\begin{aligned} {(a_{R})}_{ij}=\left\{ \begin{array}{cc}{^{\alpha _{ij}}_{\beta _{ij}}} ,&{} j\in R \\ {^{0}_{1}},&{} j\in I_{E}\setminus R \end{array}\right. \end{aligned}$$

Example 3

For \(R=\{1,3,4\}\) and \(S=\{1,3\}\), \({R^1_0}\)-restriction and S-restriction of \([a_{ij}]\) provided in Example 2 are as follows:

$$\begin{aligned} \left[ \left( {a_{R^1_0}}\right) _{ij}\right] =\left[ \begin{array}{cccc} ^{[0.1,0.4]}_{[0.4,0.5]} \,\,\,\,&{} ^{1}_{0} \,\,\,\,&{} ^{0}_{1} \,\,\,\,&{} ^{[0.2,0.5]}_{[0.1,0.2]} \\ ^{[0.4,0.6]}_{[0.2,0.3]} \,\,\,\,&{} ^{1}_{0} \,\,\,\,&{} ^{1}_{0} \,\,\,\,&{} ^{0}_{1} \\ ^{[0.7,0.8]}_{[0,0.1]} \,\,\,\,&{} ^{1}_{0} \,\,\,\,&{} ^{1}_{0} \,\,\,\,&{} ^{[0.3,0.4]}_{[0.5,0.6]} \\ ^{0}_{1} \,\,\,\,&{} ^{1}_{0} \,\,\,\,&{} ^{1}_{0} \,\,\,\,&{} ^ {0}_{1} \\ ^{[0.1,0.4]}_{[0,0.2]} \,\,\,\,&{} ^{1}_{0} \,\,\,\,&{} ^{1}_{0} \,\,\,\,&{} ^{[0,0.2]}_{[0.5,0.6]} \\ ^{0}_{1} \,\,\,\,&{} ^{1}_{0} \,\,\,\,&{} ^{1}_{0} \,\,\,\,&{} ^{[0.3,0.7]}_{[0.1,0.2]} \\ \end{array}\right] \quad \text {and } \quad \left[ ({a_{S}})_{_{ij}}\right] =\left[ \begin{array}{cccc} ^{[0.1,0.4]}_{[0.4,0.5]} \,\,\,\,&{} ^{0}_{1} \,\,\,\,&{} ^{0}_{1} \,\,\,\,&{} ^{0}_{1} \\ ^{[0.4,0.6]}_{[0.2,0.3]} \,\,\,\,&{} ^{0}_{1} \,\,\,\,&{} ^{1}_{0} \,\,\,\,&{} ^{0}_{1} \\ ^{[0.7,0.8]}_{[0,0.1]} \,\,\,\,&{} ^{0}_{1} \,\,\,\,&{} ^{1}_{0} \,\,\,\,&{} ^{0}_{1} \\ ^{0}_{1} \,\,\,\,&{} ^{0}_{1} \,\,\,\,&{} ^{1}_{0} \,\,\,\,&{} ^{0}_{1} \\ ^{[0.1,0.4]}_{[0,0.2]} \,\,\,\,&{} ^{0}_{1} \,\,\,\,&{} ^{1}_{0} \,\,\,\,&{} ^{0}_{1} \\ ^{0}_{1} \,\,\,\,&{} ^{0}_{1} \,\,\,\,&{} ^{1}_{0} \,\,\,\,&{} ^{0}_{1} \\ \end{array}\right] \end{aligned}$$
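A minimal sketch of the Rb-restriction in Definition 7, assuming the nested-list representation with entries as (membership-interval, non-membership-interval) pairs and columns numbered from 1, as in the text:

```python
def restriction(a, b, R):
    """Rb-restriction: columns indexed by R keep the entries of a;
    the remaining columns are taken from b."""
    return [[a[i][j] if (j + 1) in R else b[i][j]
             for j in range(len(a[i]))]
            for i in range(len(a))]

# R-restriction example: take [b_ij] to be the empty d-matrix [0/1].
EMPTY = ((0.0, 0.0), (1.0, 1.0))
A = [[((0.1, 0.2), (0.3, 0.4)), ((0.5, 0.6), (0.1, 0.2))]]
B = [[EMPTY, EMPTY]]
A_R = restriction(A, B, {1})   # keeps column 1 and empties column 2
```
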

Definition 8

Let \([a_{ij}],[b_{ij}] \in D_E[U]\). For all i and j, if \(\alpha _{ij}^{a} {\tilde{\le }} \alpha _{ij}^{b}\) and \(\beta _{ij}^{b} {\tilde{\le }} \beta _{ij}^{a}\), then \([a_{ij}]\) is called a submatrix of \([b_{ij}]\) and is denoted by \([a_{ij}] {\tilde{\subseteq }} [b_{ij}]\).

Definition 9

Let \([a_{ij}],[b_{ij}]\in D_E[U]\). For all i and j, if \(\alpha _{ij}^{a}= \alpha _{ij}^{b}\) and \(\beta _{ij}^{a}= \beta _{ij}^{b}\), then \([a_{ij}]\) and \([b_{ij}]\) are called equal d-matrices, denoted by \([a_{ij}]=[b_{ij}]\).

Proposition 3

Let \([a_{ij}],[b_{ij}],[c_{ij}] \in D_E[U]\). Then,

  i.

    \([a_{ij}] {\tilde{\subseteq }} \left[ ^{1}_{0}\right] \)

  ii.

    \(\left[ ^{0}_{1}\right] {\tilde{\subseteq }} [a_{ij}]\)

  iii.

    \([a_{ij}]{\tilde{\subseteq }} [a_{ij}]\)

  iv.

    \(([a_{ij}]=[b_{ij}] \wedge [b_{ij}] = [c_{ij}]) \Rightarrow [a_{ij}] = [c_{ij}]\)

  v.

    \(([a_{ij}] {\tilde{\subseteq }} [b_{ij}] \wedge [b_{ij}] {\tilde{\subseteq }} [a_{ij}]) \Leftrightarrow [a_{ij}] = [b_{ij}]\)

  vi.

    \(([a_{ij}] {\tilde{\subseteq }} [b_{ij}] \wedge [b_{ij}] {\tilde{\subseteq }} [c_{ij}]) \Rightarrow [a_{ij}] {\tilde{\subseteq }} [c_{ij}]\)

Definition 10

Let \([a_{ij}],[b_{ij}] \in D_E[U]\). If \([a_{ij}] {\tilde{\subseteq }} [b_{ij}]\) and \([a_{ij}] \ne [b_{ij}]\), then \([a_{ij}]\) is called a proper submatrix of \([b_{ij}]\) and is denoted by \([a_{ij}] {\tilde{\subsetneq }} [b_{ij}]\).

Definition 11

Let \([a_{ij}],[b_{ij}],[c_{ij}] \in D_E[U]\). For all i and j, if \(\alpha _{ij}^{c}=\sup \{\alpha _{ij}^{a}, \alpha _{ij}^{b}\}\) and \(\beta _{ij}^{c}=\inf \{\beta _{ij}^{a}, \beta _{ij}^{b}\}\), then \([c_{ij}]\) is called the union of \([a_{ij}]\) and \([b_{ij}]\) and is denoted by \([a_{ij}] {\tilde{\cup }} [b_{ij}]\).

Definition 12

Let \([a_{ij}],[b_{ij}],[c_{ij}] \in D_E[U]\). For all i and j, if \(\alpha _{ij}^{c}=\inf \{\alpha _{ij}^{a}, \alpha _{ij}^{b}\}\) and \(\beta _{ij}^{c}=\sup \{\beta _{ij}^{a}, \beta _{ij}^{b}\}\), then \([c_{ij}]\) is called the intersection of \([a_{ij}]\) and \([b_{ij}]\) and is denoted by \([a_{ij}] {\tilde{\cap }} [b_{ij}]\).

Proposition 4

Let \([a_{ij}],[b_{ij}],[c_{ij}] \in D_E[U]\). Then,

  i.

    \([a_{ij}] {\tilde{\cup }} [a_{ij}] = [a_{ij}] \) and \( [a_{ij}] {\tilde{\cap }} [a_{ij}] = [a_{ij}]\)

  ii.

    \([a_{ij}] {\tilde{\cup }} \left[ ^{0}_{1}\right] = [a_{ij}]\) and \( [a_{ij}] {\tilde{\cap }} \left[ ^{1}_{0}\right] = [a_{ij}]\)

  iii.

    \([a_{ij}] {\tilde{\cup }} \left[ ^{1}_{0}\right] = \left[ ^{1}_{0}\right] \) and \( [a_{ij}] {\tilde{\cap }} \left[ ^{0}_{1}\right] = \left[ ^{0}_{1}\right] \)

  iv.

    \([a_{ij}] {\tilde{\cup }} [b_{ij}]= [b_{ij}] {\tilde{\cup }} [a_{ij}] \) and \([a_{ij}] {\tilde{\cap }} [b_{ij}]= [b_{ij}] {\tilde{\cap }} [a_{ij}]\)

  v.

    \(([a_{ij}] {\tilde{\cup }} [b_{ij}]){\tilde{\cup }} [c_{ij}]= [a_{ij}] {\tilde{\cup }} ([b_{ij}] {\tilde{\cup }} [c_{ij}])\) and \(([a_{ij}] {\tilde{\cap }} [b_{ij}]) {\tilde{\cap }} [c_{ij}]= [a_{ij}] {\tilde{\cap }} ([b_{ij}] {\tilde{\cap }} [c_{ij}])\)

  vi.

    \([a_{ij}] {\tilde{\cup }} ([b_{ij}] {\tilde{\cap }} [c_{ij}])=([a_{ij}] {\tilde{\cup }} [b_{ij}]) {\tilde{\cap }} ([a_{ij}] {\tilde{\cup }} [c_{ij}])\)

    \([a_{ij}] {\tilde{\cap }} ([b_{ij}] {\tilde{\cup }} [c_{ij}])= ([a_{ij}] {\tilde{\cap }} [b_{ij}]) {\tilde{\cup }} ([a_{ij}] {\tilde{\cap }} [c_{ij}])\)

  vii.

    \([a_{ij}] {\tilde{\subseteq }} [b_{ij}] \Rightarrow [a_{ij}] {\tilde{\cup }} [b_{ij}]=[b_{ij}]\) and \([a_{ij}] {\tilde{\subseteq }} [b_{ij}] \Rightarrow [a_{ij}] {\tilde{\cap }} [b_{ij}]=[a_{ij}]\)

Proof

vi.:

Let \([a_{ij}],[b_{ij}],[c_{ij}] \in D_E[U]\). Then,

$$\begin{aligned} \begin{array}{lll} {[a_{ij}] {\tilde{\cup }} ([b_{ij}]{\tilde{\cap }} [c_{ij}])}&{}=&{} [a_{ij}] {\tilde{\cup }} \left[ ^{\inf \left\{ \alpha _{ij}^{b},\alpha _{ij}^{c}\right\} }_{\sup \left\{ \beta _{ij}^{b},\beta _{ij}^{c}\right\} }\right] \\ &{}=&{} \left[ ^{\sup \left\{ \alpha _{ij}^{a},\inf \left\{ \alpha _{ij}^{b},\alpha _{ij}^{c}\right\} \right\} }_{\inf \left\{ \beta _{ij}^{a},\sup \left\{ \beta _{ij}^{b}, \beta _{ij}^{c}\right\} \right\} }\right] \\ &{}=&{} \left[ ^{\inf \left\{ \sup \left\{ \alpha _{ij}^{a},\alpha _{ij}^{b}\right\} ,\sup \left\{ \alpha _{ij}^{a}, \alpha _{ij}^{c}\right\} \right\} }_{\sup \left\{ \inf \left\{ \beta _{ij}^{a},\beta _{ij}^{b}\right\} ,\inf \left\{ \beta _{ij}^{a},\beta _{ij}^{c}\right\} \right\} }\right] \\ &{}=&{} \left[ ^{\sup \left\{ \alpha _{ij}^{a},\alpha _{ij}^{b}\right\} }_{\inf \left\{ \beta _{ij}^{a},\beta _{ij}^{b}\right\} }\right] {\tilde{\cap }} \left[ ^{\sup \left\{ \alpha _{ij}^{a},\alpha _{ij}^{c}\right\} }_{\inf \left\{ \beta _{ij}^{a},\beta _{ij}^{c}\right\} }\right] \\ &{}=&{} ([a_{ij}] {\tilde{\cup }} [b_{ij}]) {\tilde{\cap }} ([a_{ij}] {\tilde{\cup }} [c_{ij}]) \\ \end{array} \end{aligned}$$

\(\square \)

Example 4

Let \(E=\{x_1,x_2,x_3\}\) and \(U=\{u_1,u_2\}\). Assume that two d-matrices \([a_{ij}]\) and \([b_{ij}]\) are as follows:

$$\begin{aligned} {[}a_{ij}]=\left[ \begin{array}{ccc} ^{[0.2,0.4]}_{[0,0.6]} \,\,&{} ^{0.3}_{0.4} \,\,&{} ^{[0.3,0.4]}_{[0.1,0.2]} \\ ^{0}_{1} \,\,&{} ^{[0,0.3]}_{[0.4,0.6]} \,\,&{} ^{0.5}_{[0,0.4]} \\ ^{[0.5,0.7]}_{[0,0.3]} \,\,&{} ^{0.2}_{0.7} \,\,&{} ^{[0.5,0.6]}_{[0.1,0.3]} \\ \end{array}\right] \quad \text {and} \quad [b_{ij}]=\left[ \begin{array}{ccc} ^{[0.1,0.3]}_{[0.1,0.2]} \,\,&{} ^{[0.2,0.4]}_{[0.3,0.5]} \,\,&{} ^{[0.2,0.8]}_{[0,0.1]} \\ ^{[0.3,0.5]}_{[0.1,0.2]} \,\,&{} ^{[0.1,0.3]}_{[0.1,0.2]} \,\,&{} ^{0.6}_{0.1} \\ ^{[0.4,0.8]}_{[0.1,0.2]} \,\,&{} ^{0}_{1} \,\,&{} ^{[0,0.1]}_{[0,0.4]} \\ \end{array}\right] \end{aligned}$$

Then,

$$\begin{aligned} {[}a_{ij}] {\tilde{\cup }} [b_{ij}]=\left[ \begin{array}{ccc} ^{[0.2,0.4]}_{[0,0.2]} \,\,&{} ^{[0.3,0.4]}_{[0.3,0.4]} \,\,&{} ^{[0.3,0.8]}_{[0,0.1]} \\ ^{[0.3,0.5]}_{[0.1,0.2]} \,\,&{} ^{[0.1,0.3]}_{[0.1,0.2]} \,\,&{} ^{0.6}_{[0,0.1]} \\ ^{[0.5,0.8]}_{[0,0.2]} \,\,&{} ^{0.2}_{0.7} \,\,&{} ^{[0.5,0.6]}_{[0,0.3]} \\ \end{array}\right] \quad \text {and} \quad [a_{ij}] {\tilde{\cap }} [b_{ij}]=\left[ \begin{array}{ccc} ^{[0.1,0.3]}_{[0.1,0.6]} \,\,&{} ^{[0.2,0.3]}_{[0.4,0.5]} \,\,&{} ^{[0.2,0.4]}_{[0.1,0.2]} \\ ^{0}_{1} \,\,&{} ^{[0,0.3]}_{[0.4,0.6]} \,\,&{} ^{0.5}_{[0.1,0.4]} \\ ^{[0.4,0.7]}_{[0.1,0.3]} \,\,&{} ^{0}_{1} \,\,&{} ^{[0,0.1]}_{[0.1,0.4]} \\ \end{array}\right] \end{aligned}$$
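Assuming the same nested-list representation with entries as (membership-interval, non-membership-interval) pairs, the entrywise union and intersection of Definitions 11 and 12 can be sketched as follows; the interval supremum and infimum are those of Proposition 2, and the function names are illustrative:

```python
def i_sup(g1, g2):
    return (max(g1[0], g2[0]), max(g1[1], g2[1]))

def i_inf(g1, g2):
    return (min(g1[0], g2[0]), min(g1[1], g2[1]))

def d_union(a, b):
    # alpha: supremum of memberships, beta: infimum of non-memberships
    return [[(i_sup(x[0], y[0]), i_inf(x[1], y[1]))
             for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def d_intersection(a, b):
    # alpha: infimum of memberships, beta: supremum of non-memberships
    return [[(i_inf(x[0], y[0]), i_sup(x[1], y[1]))
             for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

# Top-left entries a_11 and b_11 from Example 4:
A = [[((0.2, 0.4), (0.0, 0.6))]]
B = [[((0.1, 0.3), (0.1, 0.2))]]
U = d_union(A, B)          # [[((0.2, 0.4), (0.0, 0.2))]], as in Example 4
I = d_intersection(A, B)   # [[((0.1, 0.3), (0.1, 0.6))]], as in Example 4
```
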

Definition 13

Let \([a_{ij}],[b_{ij}],[c_{ij}] \in D_E[U]\). For all i and j, if \(\alpha _{ij}^{c}=\inf \{\alpha _{ij}^{a},\beta _{ij}^{b}\}\) and \(\beta _{ij}^{c}=\sup \{\beta _{ij}^{a},\alpha _{ij}^{b}\}\), then \([c_{ij}]\) is called the difference between \([a_{ij}]\) and \([b_{ij}]\) and is denoted by \([a_{ij}] {\tilde{\setminus }} [b_{ij}]\).

Proposition 5

Let \([a_{ij}] \in D_E[U]\). Then,

  i.

    \([a_{ij}] {\tilde{\setminus }} \left[ ^{0}_{1}\right] = [a_{ij}]\)

  ii.

    \([a_{ij}] {\tilde{\setminus }} \left[ ^{1}_{0}\right] = \left[ ^{0}_{1}\right] \)

  iii.

    \(\left[ ^{0}_{1}\right] {\tilde{\setminus }} [a_{ij}]= \left[ ^{0}_{1}\right] \)

Note 3

The difference operation is neither associative nor commutative.

Definition 14

Let \([a_{ij}],[b_{ij}] \in D_E[U]\). For all i and j, if \(\alpha _{ij}^{b}=\beta _{ij}^{a}\) and \(\beta _{ij}^{b}=\alpha _{ij}^{a}\), then \([b_{ij}]\) is called the complement of \([a_{ij}]\) and is denoted by \([a_{ij}]^{{\tilde{c}}}\) or \([a^{{\tilde{c}}}_{ij}]\). It is clear that \([a_{ij}]^{{\tilde{c}}}=\left[ ^{1}_{0}\right] {\tilde{\setminus }} [a_{ij}]\).
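The complement in Definition 14 amounts to swapping the membership and non-membership intervals entrywise; a minimal sketch under the nested-list representation used earlier, with illustrative names:

```python
def d_complement(a):
    # Swap each entry's membership interval alpha and non-membership interval beta.
    return [[(beta, alpha) for (alpha, beta) in row] for row in a]

# A small d-matrix with entries ((alpha_minus, alpha_plus), (beta_minus, beta_plus)):
A = [[((0.2, 0.4), (0.0, 0.6)), ((0.3, 0.3), (0.4, 0.4))]]
AC = d_complement(A)
```

Applying `d_complement` twice returns the original d-matrix, in line with Proposition 6 i.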

Proposition 6

Let \([a_{ij}],[b_{ij}] \in D_E[U]\). Then,

  i. \(([a_{ij}]^{\tilde{c}})^{\tilde{c}}= [a_{ij}]\)

  ii. \(\left[ ^{0}_{1}\right] ^{\tilde{c}} = \left[ ^{1}_{0}\right] \)

  iii. \([a_{ij}] {\tilde{\setminus }} [b_{ij}]=[a_{ij}] {\tilde{\cap }} [b_{ij}]^{\tilde{c}}\)

  iv. \([a_{ij}] {\tilde{\subseteq }} [b_{ij}] \Rightarrow [b_{ij}]^{{\tilde{c}}} {\tilde{\subseteq }} [a_{ij}]^{{\tilde{c}}}\)

Proposition 7

Let \([a_{ij}],[b_{ij}] \in D_E[U]\). Then, the following De Morgan’s laws are valid:

  i. \(([a_{ij}] {\tilde{\cup }} [b_{ij}])^{\tilde{c}}= [a_{ij}]^{\tilde{c}} {\tilde{\cap }} [b_{ij}]^{\tilde{c}}\)

  ii. \(([a_{ij}] {\tilde{\cap }} [b_{ij}])^{\tilde{c}} = [a_{ij}]^{\tilde{c}} {\tilde{\cup }} [b_{ij}]^{\tilde{c}}\)

Proof

i. :

Let \([a_{ij}],[b_{ij}] \in D_E[U]\). Then,

$$\begin{aligned} ([a_{ij}] {\tilde{\cup }} [b_{ij}])^{\tilde{c}} = \left[ ^{\sup \{\alpha _{ij}^{a},\alpha _{ij}^{b}\}}_{\inf \{\beta _{ij}^{a},\beta _{ij}^{b}\}}\right] ^{\tilde{c}} = \left[ ^{\inf \{\beta _{ij}^{a},\beta _{ij}^{b}\}}_{\sup \{\alpha _{ij}^{a},\alpha _{ij}^{b}\}}\right] = \left[ ^{\beta _{ij}^{a}}_{\alpha _{ij}^{a}}\right] {\tilde{\cap }}\left[ ^{\beta _{ij}^{b}}_{\alpha _{ij}^{b}}\right] = \left[ a_{ij}\right] ^{\tilde{c}}{\tilde{\cap }} \left[ b_{ij}\right] ^{\tilde{c}}\\ \end{aligned}$$

\(\square \)

Definition 15

Let \([a_{ij}],[b_{ij}],[c_{ij}] \in D_E[U]\). For all i and j, if

$$\begin{aligned} \alpha _{ij}^{c}=\sup \left\{ \inf \{\alpha _{ij}^{a},\beta _{ij}^{b}\},\inf \{\alpha _{ij}^{b},\beta _{ij}^{a}\}\right\} \quad \text {and} \quad \beta _{ij}^{c}=\inf \left\{ \sup \{\beta _{ij}^{a},\alpha _{ij}^{b}\},\sup \{\beta _{ij}^{b},\alpha _{ij}^{a}\}\right\} \end{aligned}$$

then \([c_{ij}]\) is called the symmetric difference between \([a_{ij}]\) and \([b_{ij}]\) and is denoted by \([a_{ij}] {\tilde{\triangle }} [b_{ij}]\).
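Definition 15 can be sketched for a single entry as follows (our illustration; the helper names are ours, and inf/sup are endpoint-wise as before):

```python
def interval_inf(x, y):
    return (min(x[0], y[0]), min(x[1], y[1]))

def interval_sup(x, y):
    return (max(x[0], y[0]), max(x[1], y[1]))

def sym_diff_entry(a, b):
    """Definition 15 for one pair of entries a = (alpha^a, beta^a) and
    b = (alpha^b, beta^b)."""
    alpha = interval_sup(interval_inf(a[0], b[1]), interval_inf(b[0], a[1]))
    beta = interval_inf(interval_sup(a[1], b[0]), interval_sup(b[1], a[0]))
    return (alpha, beta)
```

The symmetry of the formula in a and b makes Proposition 8 iii immediate.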

Proposition 8

Let \([a_{ij}],[b_{ij}] \in D_E[U]\). Then,

  i. \([a_{ij}] {\tilde{\triangle }} \left[ ^{0}_{1}\right] = [a_{ij}]\)

  ii. \([a_{ij}] {\tilde{\triangle }} \left[ ^{1}_{0}\right] = [a_{ij}]^{\tilde{c}}\)

  iii. \([a_{ij}] {\tilde{\triangle }} [b_{ij}]=[b_{ij}] {\tilde{\triangle }} [a_{ij}]\)

Note 4

The symmetric difference operation is not associative.

Example 5

For \([a_{ij}]\) and \([b_{ij}]\) in Example 4, \([a_{ij}] {\tilde{\setminus }} [b_{ij}]\) and \([a_{ij}] {\tilde{\triangle }} [b_{ij}]\) are as follows:

$$\begin{aligned} {[}a_{ij}] {\tilde{\setminus }} [b_{ij}]=\left[ \begin{array}{ccc} ^{[0.1,0.2]}_{[0.1,0.6]} \,\,&{} ^{0.3}_{0.4} \,\,&{} ^{[0,0.1]}_{[0.2,0.8]} \\ ^{0}_{1} \,\,&{} ^{[0,0.2]}_{[0.4,0.6]} \,\,&{} ^{0.1}_{0.6} \\ ^{[0.1,0.2]}_{[0.4,0.8]} \,\,&{} ^{0.2}_{0.7} \,\,&{} ^{[0,0.4]}_{[0.1,0.3]} \\ \end{array}\right] \quad \text {and } \quad [a_{ij}] {\tilde{\triangle }} [b_{ij}]=\left[ \begin{array}{ccc} ^{[0.1,0.3]}_{[0.1,0.4]} \,\,&{} ^{[0.3,0.4]}_{[0.3,0.4]} \,\,&{} ^{[0.1,0.2]}_{[0.2,0.4]} \\ ^{[0.3,0.5]}_{[0.1,0.2]} \,\,&{} ^{[0.1,0.3]}_{[0.1,0.3]} \,\,&{} ^{[0.1,0.4]}_{0.5} \\ ^{[0.1,0.3]}_{[0.4,0.7]} \,\,&{} ^{0.2}_{0.7} \,\,&{} ^{[0,0.4]}_{[0.1,0.3]} \\ \end{array}\right] \end{aligned}$$

Definition 16

Let \([a_{ij}],[b_{ij}] \in D_E[U]\). If \([a_{ij}] {\tilde{\cap }} [b_{ij}]=\left[ ^{0}_{1}\right] \), then \([a_{ij}]\) and \([b_{ij}]\) are called disjoint.

Definition 17

Let \([a_{ij}]_{m \times {n_1}} \in D_{E_1}[U]\), \([b_{ik}]_{m \times {n_2}} \in D_{E_2}[U]\), and \([c_{ip}]_{m \times {n_1n_2}} \in D_{E_1 \times E_2}[U]\) such that \(p=n_2(j-1)+k\). For all i and p, if \(\alpha _{ip}^{c}=\inf \{\alpha _{ij}^{a},\alpha _{ik}^{b}\}\) and \(\beta _{ip}^{c}=\sup \{\beta _{ij}^{a},\beta _{ik}^{b}\}\), then \([c_{ip}]\) is called the AND-product of \([a_{ij}]\) and \([b_{ik}]\) and is denoted by \([a_{ij}]{\wedge }[b_{ik}]\).

Definition 18

Let \([a_{ij}]_{m \times {n_1}} \in D_{E_1}[U]\), \([b_{ik}]_{m \times {n_2}} \in D_{E_2}[U]\), and \([c_{ip}]_{m \times {n_1n_2}} \in D_{E_1 \times E_2}[U]\) such that \(p=n_2(j-1)+k\). For all i and p, if \(\alpha _{ip}^{c}=\sup \{\alpha _{ij}^{a},\alpha _{ik}^{b}\}\) and \(\beta _{ip}^{c}=\inf \{\beta _{ij}^{a},\beta _{ik}^{b}\}\), then \([c_{ip}]\) is called the OR-product of \([a_{ij}]\) and \([b_{ik}]\) and is denoted by \([a_{ij}]{\vee }[b_{ik}]\).

Definition 19

Let \([a_{ij}]_{m \times {n_1}} \in D_{E_1}[U]\), \([b_{ik}]_{m \times {n_2}} \in D_{E_2}[U]\), and \([c_{ip}]_{m \times {n_1n_2}} \in D_{E_1 \times E_2}[U]\) such that \(p=n_2(j-1)+k\). For all i and p, if \(\alpha _{ip}^{c}=\inf \{\alpha _{ij}^{a},\beta _{ik}^{b}\}\) and \(\beta _{ip}^{c}=\sup \{\beta _{ij}^{a},\alpha _{ik}^{b}\}\), then \([c_{ip}]\) is called the ANDNOT-product of \([a_{ij}]\) and \([b_{ik}]\) and is denoted by \([a_{ij}]{{\overline{\wedge }}}[b_{ik}]\).

Definition 20

Let \([a_{ij}]_{m \times {n_1}} \in D_{E_1}[U]\), \([b_{ik}]_{m \times {n_2}} \in D_{E_2}[U]\), and \([c_{ip}]_{m \times {n_1n_2}} \in D_{E_1 \times E_2}[U]\) such that \(p=n_2(j-1)+k\). For all i and p, if \(\alpha _{ip}^{c}=\sup \{\alpha _{ij}^{a},\beta _{ik}^{b}\}\) and \(\beta _{ip}^{c}=\inf \{\beta _{ij}^{a},\alpha _{ik}^{b}\}\), then \([c_{ip}]\) is called the ORNOT-product of \([a_{ij}]\) and \([b_{ik}]\) and is denoted by \([a_{ij}]{{\underline{\vee }}}[b_{ik}]\).
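The four products differ only in which inf/sup combination is applied to each column pair (j, k). The AND-product of Definition 17 can be sketched as follows (our illustration, with 0-based Python indices, so the 1-based column map \(p=n_2(j-1)+k\) becomes \(p-1=n_2\,j+k\)):

```python
def interval_inf(x, y):
    return (min(x[0], y[0]), min(x[1], y[1]))

def interval_sup(x, y):
    return (max(x[0], y[0]), max(x[1], y[1]))

def and_product(a, b):
    """Definition 17: the result has n1*n2 columns; column p-1 = n2*j + k
    (0-based) combines column j of a with column k of b."""
    m, n1, n2 = len(a), len(a[0]), len(b[0])
    c = [[None] * (n1 * n2) for _ in range(m)]
    for i in range(m):
        for j in range(n1):
            for k in range(n2):
                c[i][n2 * j + k] = (interval_inf(a[i][j][0], b[i][k][0]),
                                    interval_sup(a[i][j][1], b[i][k][1]))
    return c
```

The OR-, ANDNOT-, and ORNOT-products follow by the obvious swaps of interval_inf/interval_sup and of the alpha/beta arguments.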

Example 6

For \([a_{ij}]\) and \([b_{ik}]\) in Example 4, \([a_{ij}] {\overline{\wedge }} [b_{ik}]\) is as follows:

$$\begin{aligned} {[}a_{ij}] {{\overline{\wedge }}} [b_{ik}]=\left[ \begin{array}{ccccccccc} ^{[0.1,0.2]}_{[0.1,0.6]} \,\,&{} ^{[0.2,0.4]}_{[0.2,0.6]} \,\,&{} ^{[0,0.1]}_{[0.2,0.8]} \,\,&{} ^{[0.1,0.2]}_{0.4} \,\,&{} ^{0.3}_{0.4} \,\,&{} ^{[0,0.1]}_{[0.4,0.8]} \,\,&{} ^{[0.1,0.2]}_{[0.1,0.3]} \,\,&{} ^{[0.3,0.4]}_{[0.2,0.4]} \,\,&{} ^{[0,0.1]}_{[0.2,0.8]} \\ ^{0}_{1} \,\,&{} ^{0}_{1} \,\,&{} ^{0}_{1} \,\,&{} ^{[0,0.2]}_{[0.4,0.6]} \,\,&{} ^{[0,0.2]}_{[0.4,0.6]} \,\,&{} ^{[0,0.1]}_{0.6} \,\,&{} ^{[0.1,0.2]}_{[0.3,0.5]} \,\,&{} ^{[0.1,0.2]}_{[0.1,0.4]} \,\,&{} ^{0.1}_{0.6} \\ ^{[0.1,0.2]}_{[0.4,0.8]} \,\,&{} ^{[0.5,0.7]}_{[0,0.3]} \,\,&{} ^{[0,0.4]}_{[0,0.3]} \,\,&{} ^{[0.1,0.2]}_{[0.7,0.8]} \,\,&{} ^{0.2}_{0.7} \,\,&{} ^{[0,0.2]}_{0.7} \,\,&{} ^{[0.1,0.2]}_{[0.4,0.8]} \,\,&{} ^{[0.5,0.6]}_{[0.1,0.3]} \,\,&{} ^{[0,0.4]}_{[0.1,0.3]} \\ \end{array}\right] \end{aligned}$$

Proposition 9

Let \([a_{ij}]_{m \times {n_1}} \in D_{E_1}[U]\), \([b_{ik}]_{m \times {n_2}} \in D_{E_2}[U]\), and \([c_{il}]_{m \times {n_3}} \in D_{E_3}[U]\). Then,

  i. \(([a_{ij}]\wedge [b_{ik}])\wedge [c_{il}] = [a_{ij}] \wedge ([b_{ik}] \wedge [c_{il}])\)

  ii. \(([a_{ij}]\vee [b_{ik}])\vee [c_{il}] = [a_{ij}] \vee ([b_{ik}] \vee [c_{il}])\)

Proof

i. :

Let \([a_{ij}]_{m \times {n_1}} \in D_{E_1}[U]\), \([b_{ik}]_{m \times {n_2}} \in D_{E_2}[U]\), \([c_{il}]_{m \times {n_3}} \in D_{E_3}[U]\), \([a_{ij}] \wedge [b_{ik}]=[d_{ip}]\), \([b_{ik}] \wedge [c_{il}]=[e_{ir}]\), \(([a_{ij}] \wedge [b_{ik}]) \wedge [c_{il}]=[f_{is}]\), and \([a_{ij}] \wedge ([b_{ik}] \wedge [c_{il}])=[h_{it}]\). Therefore, \([d_{ip}]_{m \times {n_1n_2}} \in D_{E_1 \times E_2}[U]\), \([e_{ir}]_{m \times {n_2n_3}} \in D_{E_2 \times E_3}[U]\), and \([f_{is}]_{m \times {n_1n_2n_3}},[h_{it}]_{m \times {n_1n_2n_3}} \in D_{E_1 \times E_2 \times E_3}[U]\). By Definition 17, since \(p=n_2(j-1)+k\) and \(s=n_3(p-1)+l\), then

$$\begin{aligned} s=n_3n_2(j-1)+n_3(k-1)+l \end{aligned}$$

Similarly, by Definition 17, since \(r=n_3(k-1)+l\) and \(t=n_2n_3(j-1)+r\), then

$$\begin{aligned} t=n_2n_3(j-1)+n_3(k-1)+l \end{aligned}$$

Moreover, for all i, s, and t, since

$$\begin{aligned} \alpha _{is}^{f}=\inf \{\inf \{\alpha _{ij}^{a},\alpha _{ik}^{b}\},\alpha _{il}^{c}\} \quad \text {and} \quad \beta _{is}^{f}=\sup \{\sup \{\beta _{ij}^{a},\beta _{ik}^{b}\},\beta _{il}^{c}\} \end{aligned}$$

and

$$\begin{aligned} \alpha _{it}^{h}=\inf \{\alpha _{ij}^{a},\inf \{\alpha _{ik}^{b},\alpha _{il}^{c}\}\} \quad \text {and} \quad \beta _{it}^{h}=\sup \{\beta _{ij}^{a},\sup \{\beta _{ik}^{b},\beta _{il}^{c}\}\} \end{aligned}$$

then \(\alpha _{is}^{f}=\alpha _{it}^{h}\) and \(\beta _{is}^{f}=\beta _{it}^{h}\). Thus, \(([a_{ij}] \wedge [b_{ik}]) \wedge [c_{il}]=[a_{ij}] \wedge ([b_{ik}] \wedge [c_{il}])\).

\(\square \)
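The key point of the proof is that the two 1-based column maps coincide; this can also be checked numerically (an illustration for sample sizes, not a substitute for the proof):

```python
# Check s = n3*(p-1)+l with p = n2*(j-1)+k against t = n2*n3*(j-1)+r with
# r = n3*(k-1)+l, over all admissible 1-based indices.
n1, n2, n3 = 3, 4, 5
for j in range(1, n1 + 1):
    for k in range(1, n2 + 1):
        for l in range(1, n3 + 1):
            p = n2 * (j - 1) + k
            s = n3 * (p - 1) + l               # column index in ([a] AND [b]) AND [c]
            r = n3 * (k - 1) + l
            t = n2 * n3 * (j - 1) + r          # column index in [a] AND ([b] AND [c])
            assert s == t == n2 * n3 * (j - 1) + n3 * (k - 1) + l
```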

Proposition 10

Let \([a_{ij}]_{m \times {n_1}} \in D_{E_1}[U]\) and \([b_{ik}]_{m \times {n_2}} \in D_{E_2}[U]\). Then, the following De Morgan’s laws are valid:

  i. \(([a_{ij}] \vee [b_{ik}])^{{\tilde{c}}}=[a_{ij}]^{{\tilde{c}}} \wedge [b_{ik}]^{{\tilde{c}}}\)

  ii. \(([a_{ij}] \wedge [b_{ik}])^{{\tilde{c}}}=[a_{ij}]^{{\tilde{c}}} \vee [b_{ik}]^{{\tilde{c}}}\)

  iii. \(([a_{ij}] \, {{\underline{\vee }}} \, [b_{ik}])^{{\tilde{c}}}=[a_{ij}]^{{\tilde{c}}} \, {{\overline{\wedge }}} \, [b_{ik}]^{{\tilde{c}}}\)

  iv. \(([a_{ij}] \, {{\overline{\wedge }}} \, [b_{ik}])^{{\tilde{c}}}=[a_{ij}]^{{\tilde{c}}} \, {{\underline{\vee }}} \, [b_{ik}]^{{\tilde{c}}}\)

Proof

iv. :

Let \([a_{ij}]_{m \times {n_1}} \in D_{E_1}[U]\) and \([b_{ik}]_{m \times {n_2}} \in D_{E_2}[U]\). Then,

$$\begin{aligned} ([a_{ij}] {{\overline{\wedge }}} [b_{ik}])^{\tilde{c}} = \left[ ^{\inf \{\alpha _{ij}^{a},\beta _{ik}^{b}\}}_{\sup \{\beta _{ij}^{a},\alpha _{ik}^{b}\}}\right] ^{\tilde{c}} = \left[ ^{\sup \{\beta _{ij}^{a},\alpha _{ik}^{b}\}}_{\inf \{\alpha _{ij}^{a},\beta _{ik}^{b}\}}\right] = \left[ a_{ij}\right] ^{\tilde{c}} {{\underline{\vee }}} \left[ b_{ik}\right] ^{\tilde{c}} \end{aligned}$$

\(\square \)

Note 5

The aforesaid products of d-matrices are neither commutative nor distributive over one another. Moreover, the ANDNOT-product and the ORNOT-product are not associative.

4 The configured soft decision-making method

This section first configures the SDM method (Aydın and Enginoğlu 2021a) so that it operates in the space of d-matrices. Thus, the method can be employed in decision-making problems. The configured method models a problem whose parameters and alternatives carry multiple intuitionistic fuzzy values. It consists of a pre-processing step and the main process steps. In the pre-processing step, the multiple intuitionistic fuzzy values are inputted for each parameter and for the alternatives corresponding to the parameters. In the first step of the main process, a d-matrix is constructed using the membership function, the non-membership function, and the multiple intuitionistic fuzzy values. In the second step, a column matrix with ivif-values is obtained by weighting the non-zero-indexed rows of the d-matrix with the zero-indexed one. In the third step, a score matrix is attained by taking the difference between the membership and non-membership values in each entry of this matrix. In the fourth step, an interval-valued fuzzy decision set over a set of alternatives is produced by normalising the score values and translating them to a closed classical subinterval of [0, 1]. In the final step, the optimal alternatives are selected through the linear ordering relation (Xu and Yager 2006). Henceforth, \(I_{n}=\{1,2,3,\dots ,n\}\) and \(I^{*}_{n}=\{0,1,2,\dots ,n\}\).

Algorithm Steps of the Configured Method

Input Step.:

Input the values \({\mu ^{ij}_{t}}\) and \({\nu ^{ij}_{t}}\) such that \(i \in I^{*}_{m-1}\), \(j \in I_{n}\), and \(t \in I_{s}\)

Main Steps

Step 1. :

Construct a d-matrix \([a_{ij}]_{m \times n}\) defined by \(a_{ij}:={^{\alpha ^{a}_{ij}}_{\beta ^{a}_{ij}}}\)

Here, \({\pi ^{ij}_{t}}=1-{\mu ^{ij}_{t}}-{\nu ^{ij}_{t}}\), \(I=\left\{ p : {\mu ^{ij}_{p}}=\max \limits _{t}{\mu ^{ij}_{t}}\right\} \), \(J=\left\{ r : {\nu ^{ij}_{r}}=\max \limits _{t}{\nu ^{ij}_{t}}\right\} \), \(i \in I^{*}_{m-1}\), \(j \in I_{n}\), and \(t \in I_{s}\) such that

$$\begin{aligned} \alpha ^a_{ij}:=\left[ \frac{\min \limits _{t}{\mu ^{ij}_{t}}}{\max \limits _{t}{\mu ^{ij}_{t}} + \max \limits _{t}{\nu ^{ij}_{t}} + \min \left\{ \min \limits _{p \in I} {\pi ^{ij}_{p}},\min \limits _{r \in J} {\pi ^{ij}_{r}}\right\} } , \right. \left. \frac{\max \limits _{t}{\mu ^{ij}_{t}}}{\max \limits _{t}{\mu ^{ij}_{t}} + \max \limits _{t}{\nu ^{ij}_{t}} + \min \left\{ \min \limits _{p \in I} {\pi ^{ij}_{p}},\min \limits _{r \in J} {\pi ^{ij}_{r}}\right\} }\right] \end{aligned}$$

and

$$\begin{aligned} \beta ^a_{ij}:=\left[ \frac{\min \limits _{t}{\nu ^{ij}_{t}}}{\max \limits _{t}{\mu ^{ij}_{t}} + \max \limits _{t}{\nu ^{ij}_{t}} + \min \left\{ \min \limits _{p \in I} {\pi ^{ij}_{p}},\min \limits _{r \in J} {\pi ^{ij}_{r}}\right\} } , \right. \left. \frac{\max \limits _{t}{\nu ^{ij}_{t}}}{\max \limits _{t}{\mu ^{ij}_{t}} + \max \limits _{t}{\nu ^{ij}_{t}} + \min \left\{ \min \limits _{p \in I} {\pi ^{ij}_{p}},\min \limits _{r \in J} {\pi ^{ij}_{r}}\right\} }\right] \end{aligned}$$
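Step 1 can be sketched entrywise as follows (our Python illustration; the function name step1_entry is ours, and the paper's own implementation is in MATLAB). It takes the multiple intuitionistic fuzzy values \(({\mu ^{ij}_{t}},{\nu ^{ij}_{t}})\) of one parameter-alternative pair and returns the d-matrix entry \((\alpha ^a_{ij},\beta ^a_{ij})\):

```python
def step1_entry(mus, nus):
    """Build one d-matrix entry (alpha, beta) from the multiple intuitionistic
    fuzzy values (mu_t, nu_t), following the formulas of Step 1."""
    pis = [1 - m - n for m, n in zip(mus, nus)]          # hesitancy degrees
    mu_max, nu_max = max(mus), max(nus)
    I = [t for t, m in enumerate(mus) if m == mu_max]    # argmax set of mu
    J = [t for t, n in enumerate(nus) if n == nu_max]    # argmax set of nu
    denom = mu_max + nu_max + min(min(pis[t] for t in I),
                                  min(pis[t] for t in J))
    alpha = (min(mus) / denom, mu_max / denom)
    beta = (min(nus) / denom, nu_max / denom)
    return alpha, beta
```

With \({\nu ^{ij}_{t}}=1-{\mu ^{ij}_{t}}\), all hesitancy degrees vanish and the denominator reduces to \(\max _{t}{\mu ^{ij}_{t}}+\max _{t}\{1-{\mu ^{ij}_{t}}\}\), which is the case used in Sect. 5.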
Step 2. :

Obtain the ivif-valued column matrix \(\left[ ^{\alpha _{i1}}_{\beta _{i1}}\right] _{(m-1) \times 1}\) defined by

$$\begin{aligned} \alpha _{i1}:=\frac{1}{\lambda } \sum _{j=1}^{n} \alpha ^{a}_{0j} \alpha ^{a}_{ij} \quad \text {and} \quad \beta _{i1}:=\frac{1}{\lambda } \sum _{j=1}^{n} \beta ^{a}_{0j} \beta ^{a}_{ij} \end{aligned}$$

such that \(i \in I_{m-1}\). Here,

$$\begin{aligned} \lambda :=\frac{1}{2} \sum _{j=1}^{n} \left( 1 + \frac{(\alpha ^{a}_{0j})^{-} + (\alpha ^{a}_{0j})^{+}}{2} - \frac{(\beta ^{a}_{0j})^{-} + (\beta ^{a}_{0j})^{+}}{2}\right) \end{aligned}$$
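Step 2 can be sketched as follows (our illustration; lam_of and step2 are our names). It assumes the endpoint-wise interval product \([a,b]\cdot [c,d]=[ac,bd]\) for subintervals of [0, 1], which is the product used in the worked computations of Sect. 5:

```python
def lam_of(a0):
    """The normalising constant lambda of Step 2, computed from the
    zero-indexed row a0 = [((alo, ahi), (blo, bhi)), ...]."""
    return 0.5 * sum(1 + (al[0] + al[1]) / 2 - (be[0] + be[1]) / 2
                     for al, be in a0)

def step2(a0, rows):
    """Weight each non-zero-indexed row by the zero-indexed row a0 and divide
    by lambda, giving one ivif-value per alternative."""
    lam = lam_of(a0)
    out = []
    for row in rows:
        alpha = (sum(w[0][0] * e[0][0] for w, e in zip(a0, row)) / lam,
                 sum(w[0][1] * e[0][1] for w, e in zip(a0, row)) / lam)
        beta = (sum(w[1][0] * e[1][0] for w, e in zip(a0, row)) / lam,
                sum(w[1][1] * e[1][1] for w, e in zip(a0, row)) / lam)
        out.append((alpha, beta))
    return out
```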
Step 3. :

Obtain the score matrix \([s_{i1}]_{(m-1) \times 1}\) defined by \(s_{i1}:=\alpha _{i1}-\beta _{i1}\) such that \(i \in I_{m-1}\)

Step 4. :

Obtain the decision set \(\{^{d(u_k)}u_k | u_k\in U\}\) such that

$$\begin{aligned} d(u_k)=\left\{ \begin{array}{rl}\left[ \frac{s_{k1}^- + |\min \limits _{i}s_{i1}^-|}{\max \limits _{i}s_{i1}^+ + |\min \limits _{i}s_{i1}^-|},\frac{s_{k1}^+ + |\min \limits _{i}s_{i1}^-|}{\max \limits _{i}s_{i1}^+ + |\min \limits _{i}s_{i1}^-|}\right] , &{} \max \limits _{i}s_{i1}^+ + |\min \limits _{i}s_{i1}^-| \ne 0 \\ {[}1,1],&{} \max \limits _{i}s_{i1}^+ + |\min \limits _{i}s_{i1}^-| = 0 \end{array}\right. \end{aligned}$$
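The normalisation of Step 4 can be sketched as follows (our illustration; step4 is our name), including the degenerate case of the definition:

```python
def step4(scores):
    """Normalise the score intervals s_i1 = [s-, s+] to decision values in a
    closed subinterval of [0, 1], following the case distinction of Step 4."""
    shift = abs(min(s[0] for s in scores))       # |min_i s_i1^-|
    denom = max(s[1] for s in scores) + shift    # max_i s_i1^+ + |min_i s_i1^-|
    if denom == 0:
        return [(1.0, 1.0)] * len(scores)
    return [((s[0] + shift) / denom, (s[1] + shift) / denom) for s in scores]
```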
Step 5. :

Select the optimal elements among the alternatives via linear ordering relation (Xu and Yager 2006)

$$\begin{aligned}&\left[ \gamma ^{-}_1,\gamma ^{+}_1\right] \le _{_{XY}} \left[ \gamma ^{-}_2,\gamma ^{+}_2\right] \\&\Leftrightarrow \left[ \left( \gamma ^{-}_1 + \gamma ^{+}_1 < \gamma ^{-}_2 + \gamma ^{+}_2 \right) \vee \left( \gamma ^{-}_1 + \gamma ^{+}_1 = \gamma ^{-}_2 + \gamma ^{+}_2 \wedge \gamma ^{-}_1 - \gamma ^{+}_1 \le \gamma ^{-}_2 - \gamma ^{+}_2 \right) \right] \end{aligned}$$

Here, \(\alpha ^{a}_{0j}=[(\alpha ^{a}_{0j})^{-},(\alpha ^{a}_{0j})^{+}]\), \(\beta ^{a}_{0j}=[(\beta ^{a}_{0j})^{-},(\beta ^{a}_{0j})^{+}]\), and \(s_{i1}=[s_{i1}^-,s_{i1}^+]\).

5 An application of the configured method to performance-based value assignment problem

In this section, we apply the configured method to the PVA problem for seven known filters used in image denoising, namely Based on Pixel Density Filter (BPDF) (Erkan and Gökrem 2018), Modified Decision-Based Unsymmetric Trimmed Median Filter (MDBUTMF) (Esakkirajan et al. 2011), Decision-Based Algorithm (DBAIN) (Srinivasan and Ebenezer 2007), Noise Adaptive Fuzzy Switching Median Filter (NAFSMF) (Toh and Isa 2010), Different Applied Median Filter (DAMF) (Erkan et al. 2018), Adaptive Weighted Mean Filter (AWMF) (Tang et al. 2016), and Adaptive Riesz Mean Filter (ARmF) (Enginoğlu et al. 2019b). Hereinafter, let \(U=\{u_1,u_2,u_3,u_4,u_5,u_6,u_7\}\) be an alternative set such that \(u_1=\) “BPDF”, \(u_2=\) “MDBUTMF”, \(u_3=\) “DBAIN”, \(u_4=\) “NAFSMF”, \(u_5=\) “DAMF”, \(u_6=\) “AWMF”, and \(u_7=\) “ARmF”. Moreover, let \(E=\{x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8,x_9\}\) be a parameter set determined by a decision-maker such that \(x_1=\)noise density \(10\%\)”, \(x_2=\)noise density \(20\%\)”, \(x_3=\)noise density \(30\%\)”, \(x_4=\)noise density \(40\%\)”, \(x_5=\)noise density \(50\%\)”, \(x_6=\)noise density \(60\%\)”, \(x_7=\)noise density \(70\%\)”, \(x_8=\)noise density \(80\%\)”, and \(x_9=\)noise density \(90\%\)”.

First, we consider 20 traditional test images, namely “Lena”, “Cameraman”, “Barbara”, “Baboon”, “Peppers”, “Living Room”, “Lake”, “Plane”, “Hill”, “Pirate”, “Boat”, “House”, “Bridge”, “Elaine”, “Flintstones”, “Flower”, “Parrot”, “Dark-Haired Woman”, “Blonde Woman”, and “Einstein”. To this end, we present the noise-removal performance values of the aforesaid filters by Structural Similarity (SSIM) (Wang et al. 2004) for the images at noise densities ranging from \(10\%\) to \(90\%\), in Tables 1, 2, 3, and 4, respectively. Moreover, we obtain the results herein by MATLAB R2021a. When the SSIM values provided in the tables are examined, it is observed that ARmF performs better than the other filters at all the noise densities and for all the images. However, it is not obvious which filter ranks second, third, and so on. Our motivation is to overcome this problem.

Table 1 SSIM results of the filters for Lena, cameraman, Barbara, baboon, and peppers images
Table 2 SSIM results of the filters for living room, lake, plane, hill, pirate, boat, and house images
Table 3 SSIM results of the filters for bridge, Elaine, flintstones, flower, parrot, dark-haired woman, and blonde woman images
Table 4 SSIM results of the filters for Einstein image

For the problem, let \((\mu ^{ij}_{t})\) be the ordered vigintuple (20-tuple) such that \(\mu ^{ij}_{t}\) corresponds to the SSIM result in Tables 1, 2, 3, and 4 obtained for the \(t^{th}\) image, the \(i^{th}\) filter, and the \(j^{th}\) noise density. Here, since \({\nu ^{ij}_{t}}=1-{\mu ^{ij}_{t}}\) and \({\pi ^{ij}_{t}}=0\) such that \(i \in I_{7}\), \(j \in I_{9}\), and \(t \in I_{20}\), then for the d-matrix \([a_{ij}]\),

$$\begin{aligned} \alpha ^a_{ij}:=\left[ \frac{\min \limits _{t}{\mu ^{ij}_{t}}}{\max \limits _{t}{\mu ^{ij}_{t}} + \max \limits _{t}\{1-{\mu ^{ij}_{t}}\}} , \frac{\max \limits _{t}{\mu ^{ij}_{t}}}{\max \limits _{t}{\mu ^{ij}_{t}} + \max \limits _{t}\{1-{\mu ^{ij}_{t}}\}} \right] \end{aligned}$$

and

$$\begin{aligned} \beta ^a_{ij}:=\left[ \frac{\min \limits _{t}\{1-{\mu ^{ij}_{t}}\}}{\max \limits _{t}{\mu ^{ij}_{t}} + \max \limits _{t}\{1-{\mu ^{ij}_{t}}\}} , \frac{\max \limits _{t}\{1-{\mu ^{ij}_{t}}\}}{\max \limits _{t}{\mu ^{ij}_{t}} + \max \limits _{t}\{1-{\mu ^{ij}_{t}}\}} \right] \end{aligned}$$

For example, the ordered vigintuple

$$\begin{aligned} \begin{array}{rl}(\mu ^{54}_{t}) =&{}(0.9488,0.9759,0.9013,0.9356,0.9110,0.9152,0.9285,0.9648,0.9181,0.9332, \\ {} &{}\,\,\, 0.9123,0.9861,0.8953,0.8961,0.9173,0.9513,0.9563,0.9743,0.9053,0.9445 )\end{array} \end{aligned}$$

indicates the SSIM results of DAMF for the 20 traditional test images at noise density \(40\%\). Since

$$\begin{aligned}&\alpha ^a_{54}=\left[ \frac{\min \limits _{t}{\mu ^{54}_{t}}}{\max \limits _{t}{\mu ^{54}_{t}} + \max \limits _{t}\{1-{\mu ^{54}_{t}}\}} , \frac{\max \limits _{t}{\mu ^{54}_{t}}}{\max \limits _{t}{\mu ^{54}_{t}} + \max \limits _{t}\{1-{\mu ^{54}_{t}}\}}\right] \\&\quad = \left[ \frac{0.8953}{0.9861 + 0.1047} , \frac{0.9861}{0.9861 + 0.1047}\right] = [0.8207,0.9040] \end{aligned}$$

and

$$\begin{aligned}&\beta ^a_{54}=\left[ \frac{\min \limits _{t}\{1-{\mu ^{54}_{t}}\}}{\max \limits _{t}{\mu ^{54}_{t}} + \max \limits _{t}\{1-{\mu ^{54}_{t}}\}} , \frac{\max \limits _{t}\{1-{\mu ^{54}_{t}}\}}{\max \limits _{t}{\mu ^{54}_{t}} + \max \limits _{t}\{1-{\mu ^{54}_{t}}\}}\right] \\&\quad = \left[ \frac{0.0139}{0.9861 + 0.1047} , \frac{0.1047}{0.9861 + 0.1047}\right] = [0.0127,0.0960] \end{aligned}$$

then \(a_{54}={^{[0.8207,0.9040]}_{[0.0127,0.0960]}}\). Here, [0.8207, 0.9040] signifies that the success of DAMF in image denoising at noise density \(40\%\) ranges from approximately \(82\%\) to \(90\%\). Moreover, [0.0127, 0.0960] means that the rate of DAMF’s failure in image denoising at the same noise density lies approximately between \(1\%\) and \(9\%\). Similarly, all the rows of the d-matrix \([a_{ij}]\) except the zero-indexed row can be obtained. Besides, if the noise-removal performances of the filters are assumed to be more significant at high noise densities, in which noisy pixels outnumber uncorrupted ones, then performance-based success matters more at high noise densities than at the others. For example, let
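For the SSIM application, where \({\nu ^{ij}_{t}}=1-{\mu ^{ij}_{t}}\) and \({\pi ^{ij}_{t}}=0\), the entry formulas simplify as above and can be sketched as follows (our illustration; ssim_entry is our name):

```python
def ssim_entry(mus):
    """One d-matrix entry from a tuple of SSIM values: since nu = 1 - mu and
    pi = 0, the denominator reduces to max mu + max(1 - mu)."""
    denom = max(mus) + max(1 - m for m in mus)
    alpha = (min(mus) / denom, max(mus) / denom)
    beta = (min(1 - m for m in mus) / denom, max(1 - m for m in mus) / denom)
    return alpha, beta
```

Applied to the vigintuple \((\mu ^{54}_{t})\) above, it reproduces \(a_{54}\) up to the rounding used in the text.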

$$\begin{aligned} {[}a_{0j}]= \left[ {^{[0,0.01]}_{[0.9,0.95]}} \,\,\, {^{[0,0.05]}_{[0.85,0.9]}} \,\,\, {^{[0,0.1]}_{[0.8,0.85]}} \,\,\, {^{[0.05,0.35]}_{[0.25,0.5]}} \,\,\, {^{[0.2,0.45]}_{[0.2,0.45]}} \,\,\, {^{[0.25,0.5]}_{[0.05,0.35]}} \,\,\, {^{[0.8,0.85]}_{[0,0.1]}} \,\,\, {^{[0.85,0.9]}_{[0,0.05]}} \,\,\, {^{[0.9,0.95]}_{[0,0.01]}} \right] \end{aligned}$$

Thus, the d-matrix \([a_{ij}]\), modelling the SSIM values provided in Tables 1, 2, 3, and 4, is as follows:

$$\begin{aligned}&\begin{array}{rlll} [a_{ij}]=&{}\left[ \begin{array}{ccccc} ^{[0,0.01]}_{[0.9,0.95]} &{} ^{[0,0.05]}_{[0.85,0.9]} &{} ^{[0,0.1]}_{[0.8,0.85]} &{} ^{[0.05,0.35]}_{[0.25,0.5]} &{} ^{[0.2,0.45]}_{[0.2,0.45]} \\ ^{[0.9392,0.9666]}_{[0.0060,0.0334]} \,\,\, &{} ^{[0.8872,0.9368]}_{[0.0135,0.0632]} \,\,\, &{} ^{[0.8145,0.8948]}_{[0.0248,0.1052]} \,\,\, &{} ^{[0.7330,0.8465]}_{[0.0399,0.1535]} \,\,\, &{} ^{[0.6392,0.7873]}_{[0.0646,0.2127]} \\ ^{[0.9355,0.9653]}_{[0.0049,0.0347]} \,\,\, &{} ^{[0.8991,0.9248]}_{[0.0496,0.0752]} \,\,\, &{} ^{[0.7221,0.8002]}_{[0.1216,0.1998]} \,\,\, &{} ^{[0.6937,0.7736]}_{[0.1465,0.2264]} \,\,\, &{} ^{[0.7155,0.8046]}_{[0.1063,0.1954]} \\ ^{[0.9383,0.9676]}_{[0.0030,0.0324]} \,\,\, &{} ^{[0.8978,0.9451]}_{[0.0076,0.0549]} \,\,\, &{} ^{[0.8388,0.9116]}_{[0.0156,0.0884 ]} \,\,\, &{} ^{[0.7669,0.8702]}_{[0.0266,0.1298]} \,\,\, &{} ^{[0.6822,0.8205]}_{[0.0412,0.1795]} \\ ^{[0.9319,0.9618]}_{[0.0084,0.0382]} \,\,\, &{} ^{[0.8682,0.9261]}_{[0.0159,0.0739]} \,\,\, &{} ^{[0.7994,0.8875]}_{[0.0243,0.1125]} \,\,\, &{} ^{[0.7325,0.8505]}_{[0.0314,0.1495]} \,\,\, &{} ^{[0.6648,0.8125]}_{[0.0397,0.1875]} \\ ^{[0.9433,0.9708]}_{[0.0018,0.0292]} \,\,\, &{} ^{[0.9119,0.9538]}_{[0.0043,0.0462]} \,\,\, &{} ^{[0.8710,0.9314]}_{[0.0082,0.0686]} \,\,\, &{} ^{[0.8207,0.9040]}_{[0.0127,0.0960]} \,\,\, &{} ^{[0.7623,0.8721]}_{[0.0182,0.1279]} \\ ^{[0.9200,0.9568]}_{[0.0065,0.0432]} \,\,\, &{} ^{[0.9003,0.9465]}_{[0.0073,0.0535]} \,\,\, &{} ^{[0.8610,0.9261]}_{[0.0089,0.0739]} \,\,\, &{} ^{[0.8187,0.9038]}_{[0.0112,0.0962]} \,\,\, &{} ^{[0.7673,0.8762]}_{[0.0148,0.1238]} \\ ^{[0.9463,0.9725]}_{[0.0013,0.0275]} \,\,\, &{} ^{[0.9131,0.9551]}_{[0.0028,0.0449]} \,\,\, &{} ^{[0.8687,0.9318]}_{[0.0051,0.0682]} \,\,\, &{} ^{[0.8199,0.9060]}_{[0.0079,0.0940]} \,\,\, &{} ^{[0.7682,0.8780]}_{[0.0122,0.1220]} \\ \end{array}\right. \\ \end{array} \\&\qquad \qquad \qquad \begin{array}{rlll} &{}\left. 
\begin{array}{cccc} ^{[0.25,0.5]}_{[0.05,0.35]} &{} ^{[0.8,0.85]}_{[0,0.1]} &{} ^{[0.85,0.9]}_{[0,0.05]} &{} ^{[0.9,0.95]}_{[0,0.01]} \\ ^{[0.5210,0.7135]}_{[0.0940,0.2865]} \,\,\, &{} ^{[0.3982,0.6263]}_{[0.1456,0.3737]} \,\,\, &{} ^{[0.2732,0.5243]}_{[0.2245,0.4757]} \,\,\, &{} ^{[0.0909,0.3687]}_{[0.3535,0.6313]} \\ ^{[0.6376,0.7956]}_{[0.0464,0.2044]} \,\,\, &{} ^{[0.5572,0.7555]}_{[0.0461,0.2445]} \,\,\, &{} ^{[0.4747,0.6836]}_{[0.1075,0.3164]} \,\,\, &{} ^{[0.3096,0.4230]}_{[0.4635,0.5770]} \\ ^{[0.5855,0.7614]}_{[0.0628,0.2386]} \,\,\, &{} ^{[0.4766,0.6902]}_{[0.0962,0.3098]} \,\,\, &{} ^{[0.3680,0.6139]}_{[0.1401,0.3861]} \,\,\, &{} ^{[0.2565,0.5274]}_{[0.2017,0.4726]} \\ ^{[0.5913,0.7713]}_{[0.0488,0.2287]} \,\,\, &{} ^{[0.5162,0.7269]}_{[0.0623,0.2731]} \,\,\, &{} ^{[0.4384,0.6781]}_{[0.0823,0.3219]} \,\,\, &{} ^{[0.3455,0.5908]}_{[0.1640,0.4092]} \\ ^{[0.6936,0.8343]}_{[0.0250,0.1657]} \,\,\, &{} ^{[0.6163,0.7907]}_{[0.0349,0.2093]} \,\,\, &{} ^{[0.5247,0.7378]}_{[0.0491, 0.2622]} \,\,\, &{} ^{[0.4030,0.6588]}_{[0.0854,0.3412]} \\ ^{[0.7018,0.8405]}_{[0.0207,0.1595]} \,\,\, &{} ^{[0.6252,0.7973]}_{[0.0307,0.2027]} \,\,\, &{} ^{[0.5308,0.7428]}_{[0.0452,0.2572]} \,\,\, &{} ^{[0.4058,0.6638]}_{[0.0781,0.3362]} \\ ^{[0.7135,0.8475]}_{[0.0186,0.1525]} \,\,\, &{} ^{[0.6392,0.8051]}_{[0.0290,0.1949]} \,\,\, &{} ^{[0.5401,0.7481]}_{[0.0439,0.2519]} \,\,\, &{} ^{[0.4101,0.6665]}_{[0.0772,0.3335]} \\ \end{array}\right] \end{array} \end{aligned}$$

Second, we apply the configured method to \([a_{ij}]\). Moreover, we obtain the results herein by MATLAB R2021a.

Step 2.:

The column matrix \(\left[ ^{\alpha _{i1}}_{\beta _{i1}}\right] \) is as follows:

$$\begin{aligned} \left[ ^{\alpha _{i1}}_{\beta _{i1}}\right] = \left[ \begin{array}{llll}{^{[0.2061,0.5573]}_{[0.0143,0.1151]}} \,\,\, {^{[0.3256,0.6280]}_{[0.0454,0.1309]}} \,\,\, {^{[0.2769,0.6317]}_{[0.0088,0.0977]}} \,\,\, {^{[0.3142,0.6629]}_{[0.0131,0.1078]}} \,\,\ {^{[0.3708,0.7197]}_{[0.0044,0.0730]}} \,\,\, {^{[0.3747,0.7238]}_{[0.0058,0.0774]}} \,\,\, {^{[0.3805,0.7283]}_{[0.0029,0.0700]}}\end{array} \right] ^{T} \end{aligned}$$

To exemplify, \(\alpha _{11}\) and \(\beta _{11}\) are calculated as follows:

$$\begin{aligned} \begin{array}{rllll} \alpha _{11}&{}=\frac{1}{\lambda } \sum _{j=1}^{9} \alpha ^{a}_{0j} \alpha ^{a}_{1j} \\ &{} =\frac{1}{4.5}\left( \alpha ^{a}_{01} \alpha ^{a}_{11} + \alpha ^{a}_{02} \alpha ^{a}_{12} + \alpha ^{a}_{03} \alpha ^{a}_{13} + \alpha ^{a}_{04} \alpha ^{a}_{14} + \alpha ^{a}_{05} \alpha ^{a}_{15} + \alpha ^{a}_{06} \alpha ^{a}_{16} + \alpha ^{a}_{07} \alpha ^{a}_{17} + \alpha ^{a}_{08} \alpha ^{a}_{18} + \alpha ^{a}_{09} \alpha ^{a}_{19}\right) \\ &{} = \frac{1}{4.5}\left( [0,0.01] \cdot [0.9392,0.9666] + [0,0.05] \cdot [0.8872,0.9368]\right. \\ &{} \quad \left. + [0,0.1] \cdot [0.8145,0.8948] + [0.05,0.35] \cdot [0.7330,0.8465] + [0.2,0.45] \cdot [0.6392,0.7873]\right. \\ &{} \quad + [0.25,0.5] \cdot [0.5210,0.7135] + [0.8,0.85] \cdot [0.3982,0.6263] + [0.85,0.9] \cdot [0.2732,0.5243] \\ &{} \quad \left. + [0.9,0.95] \cdot [0.0909,0.3687]\right) \\ &{} = [0.2061,0.5573] \end{array} \end{aligned}$$

and

$$\begin{aligned} \begin{array}{rl} \beta _{11}=&{}\frac{1}{\lambda } \sum _{j=1}^{9} \beta ^{a}_{0j} \beta ^{a}_{1j} \\ =&{}\frac{1}{4.5}\left( \beta ^{a}_{01} \beta ^{a}_{11} + \beta ^{a}_{02} \beta ^{a}_{12} + \beta ^{a}_{03} \beta ^{a}_{13} + \beta ^{a}_{04} \beta ^{a}_{14} + \beta ^{a}_{05} \beta ^{a}_{15} + \beta ^{a}_{06} \beta ^{a}_{16} + \beta ^{a}_{07} \beta ^{a}_{17} + \beta ^{a}_{08} \beta ^{a}_{18} + \beta ^{a}_{09} \beta ^{a}_{19}\right) \\ = &{} \frac{1}{4.5}\left( [0.9,0.95] \cdot [0.0060,0.0334] + [0.85,0.9] \cdot [0.0135,0.0632]+ [0.8,0.85] \cdot [0.0248,0.1052]\right. \\ &{} \quad + [0.25,0.5] \cdot [0.0399,0.1535] + [0.2,0.45] \cdot [0.0646,0.2127] + [0.05,0.35] \cdot [0.0940,0.2865] \\ &{} \quad + [0,0.1] \cdot [0.1456,0.3737] + [0,0.05] \cdot [0.2245,0.4757] \left. + [0,0.01] \cdot [0.3535,0.6313]\right) \\ =&{} [0.0143,0.1151] \end{array} \end{aligned}$$

such that

$$\begin{aligned} \begin{array}{rl} \lambda =&{}\frac{1}{2} \sum _{j=1}^{9} \left( 1 + \frac{(\alpha ^{a}_{0j})^{-} + (\alpha ^{a}_{0j})^{+}}{2} - \frac{(\beta ^{a}_{0j})^{-} + (\beta ^{a}_{0j})^{+}}{2}\right) \\ =&{}\frac{1}{2} \left( \left( 1 + \frac{(\alpha ^{a}_{01})^{-} + (\alpha ^{a}_{01})^{+}}{2} - \frac{(\beta ^{a}_{01})^{-} + (\beta ^{a}_{01})^{+}}{2}\right) + \left( 1 + \frac{(\alpha ^{a}_{02})^{-} + (\alpha ^{a}_{02})^{+}}{2} - \frac{(\beta ^{a}_{02})^{-} + (\beta ^{a}_{02})^{+}}{2}\right) \right. \\ &{} \left. \quad + \left( 1 + \frac{(\alpha ^{a}_{03})^{-} + (\alpha ^{a}_{03})^{+}}{2} - \frac{(\beta ^{a}_{03})^{-} + (\beta ^{a}_{03})^{+}}{2}\right) +\left( 1 + \frac{(\alpha ^{a}_{04})^{-} + (\alpha ^{a}_{04})^{+}}{2} - \frac{(\beta ^{a}_{04})^{-} + (\beta ^{a}_{04})^{+}}{2}\right) \right. \\ &{} \left. \quad + \left( 1 + \frac{(\alpha ^{a}_{05})^{-} + (\alpha ^{a}_{05})^{+}}{2} - \frac{(\beta ^{a}_{05})^{-} + (\beta ^{a}_{05})^{+}}{2}\right) +\left( 1 + \frac{(\alpha ^{a}_{06})^{-} + (\alpha ^{a}_{06})^{+}}{2} - \frac{(\beta ^{a}_{06})^{-} + (\beta ^{a}_{06})^{+}}{2}\right) \right. \\ &{} \left. \quad + \left( 1 + \frac{(\alpha ^{a}_{07})^{-} + (\alpha ^{a}_{07})^{+}}{2} - \frac{(\beta ^{a}_{07})^{-} + (\beta ^{a}_{07})^{+}}{2}\right) + \left( 1 + \frac{(\alpha ^{a}_{08})^{-} + (\alpha ^{a}_{08})^{+}}{2} - \frac{(\beta ^{a}_{08})^{-} + (\beta ^{a}_{08})^{+}}{2}\right) \right. \\ &{} \left. \quad + \left( 1 + \frac{(\alpha ^{a}_{09})^{-} + (\alpha ^{a}_{09})^{+}}{2} - \frac{(\beta ^{a}_{09})^{-} + (\beta ^{a}_{09})^{+}}{2}\right) \right) \\ =&{}\frac{1}{2} \left[ \left( 1 + \frac{0 + 0.01}{2} - \frac{0.9 + 0.95}{2}\right) + \left( 1 + \frac{0 + 0.05}{2} - \frac{0.85 + 0.9}{2}\right) + \left( 1 + \frac{0 + 0.1}{2} - \frac{0.8 + 0.85}{2}\right) \right. \\ &{} \left. 
\quad + \left( 1 + \frac{0.05 + 0.35}{2} - \frac{0.25 + 0.5}{2}\right) + \left( 1 + \frac{0.2 + 0.45}{2} - \frac{0.2 + 0.45}{2}\right) + \left( 1 + \frac{0.25 + 0.5}{2} - \frac{0.05 + 0.35}{2}\right) \right. \\ &{} \left. \quad + \left( 1 + \frac{0.8 + 0.85}{2} - \frac{0 + 0.1}{2}\right) + \left( 1 + \frac{0.85 +0.9}{2} - \frac{0 + 0.05}{2}\right) + \left( 1 + \frac{0.9 + 0.95}{2} - \frac{0 + 0.01}{2}\right) \right] \\ =&{} 4.5 \end{array} \end{aligned}$$
Step 3.:

The score matrix is as follows:

$$\begin{aligned} \begin{array}{rl} [s_{i1}]=&{} \left[ {[0.0909,0.5430]} \,\,\,\, {[0.1946,0.5826]} \,\,\,\, {[0.1792,0.6229]} \,\,\,\, {[0.2064,0.6498]} \right. \\ {} &{} \left. \,\,\,\, {[0.2977,0.7152]} \,\,\,\, {[0.2974,0.7181]} \,\,\,\, {[0.3105,0.7254]} \right] ^{T} \end{array} \end{aligned}$$

Here,

$$\begin{aligned} s_{11} = \alpha _{11}-\beta _{11} = [0.2061,0.5573]- [0.0143,0.1151] = [0.0909,0.5430] \end{aligned}$$
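As the computation of \(s_{11}\) shows, the subtraction in Step 3 is the interval subtraction \([a^-,a^+]-[b^-,b^+]=[a^--b^+,a^+-b^-]\); in code (our illustration):

```python
def score(alpha, beta):
    """Step 3: interval subtraction s = alpha - beta, i.e.
    [a-, a+] - [b-, b+] = [a- - b+, a+ - b-]."""
    return (alpha[0] - beta[1], alpha[1] - beta[0])
```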
Step 4.:

The decision set is as follows:

$$\begin{aligned} \begin{array}{llll} \left\{ ^{[0.2228,0.7765]}\text {BPDF}, ^{[0.3498,0.8251]}\text {MDBUTMF}, ^{[0.3309,0.8744]}\text {DBAIN}, \right. \\ \,\,\,^{[0.3642,0.9074]}\text {NAFSMF}, \left. ^{[0.4761,0.9875]}\text {DAMF}, ^{[0.4757,0.9910]}\text {AWMF}, ^{[0.4917,1]}\text {ARmF}\right\} \end{array} \end{aligned}$$

Here,

$$\begin{aligned} d(u_1)= & {} \left[ \frac{s_{11}^- + |\min \limits _{i}s_{i1}^-|}{\max \limits _{i}s_{i1}^+ + |\min \limits _{i}s_{i1}^-|},\frac{s_{11}^+ + |\min \limits _{i}s_{i1}^-|}{\max \limits _{i}s_{i1}^+ + |\min \limits _{i}s_{i1}^-|}\right] \\= & {} \left[ \frac{0.0909 + |0.0909|}{0.7254 + |0.0909|},\frac{0.5430 + |0.0909|}{0.7254 + |0.0909|}\right] \\= & {} [0.2228,0.7765] \end{aligned}$$
Step 5.:

The ranking order

$$\begin{aligned} \text {BPDF} \prec \text {MDBUTMF} \prec \text {DBAIN} \prec \text {NAFSMF} \prec \text {DAMF} \prec \text {AWMF} \prec \text {ARmF} \end{aligned}$$

is valid. Therefore, the performance ranking of the filters shows that ARmF outperforms the other filters.
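The ranking can be double-checked numerically (our illustration): sorting in ascending order by the key (sum of endpoints, lower endpoint minus upper endpoint) is equivalent to the Xu-Yager relation of Step 5, and the rounded decision values of Step 4 reproduce the order above.

```python
decision = {"BPDF": (0.2228, 0.7765), "MDBUTMF": (0.3498, 0.8251),
            "DBAIN": (0.3309, 0.8744), "NAFSMF": (0.3642, 0.9074),
            "DAMF": (0.4761, 0.9875), "AWMF": (0.4757, 0.9910),
            "ARmF": (0.4917, 1.0)}
# Ascending Xu-Yager order: compare endpoint sums first, then lower - upper.
ranking = sorted(decision, key=lambda u: (sum(decision[u]),
                                          decision[u][0] - decision[u][1]))
```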

Thirdly, we consider 40 test images in the TESTIMAGES database (Asuni and Giachetti 2014), namely “Almonds”, “Apples”, “Balloons”, “Bananas”, “Billiard Balls 1”, “Billiard Balls 2”, “Building”, “Cards 1”, “Cards 2”, “Carrots”, “Chairs”, “Clips”, “Coins”, “Cushions”, “Duck”, “Fence”, “Flowers”, “Garden Table”, “Guitar Bridge”, “Guitar Fret”, “Guitar Head”, “Keyboard 1”, “Keyboard 2”, “Lion”, “Multimeter”, “Pencils 1”, “Pencils 2”, “Pillar”, “Plastic”, “Roof”, “Scarf”, “Screws”, “Snails”, “Socks”, “Sweets”, “Tomatoes 1”, “Tomatoes 2”, “Tools 1”, “Tools 2”, and “Wood Game”. To this end, we present the results of the aforesaid filters by SSIM for the images at noise densities ranging from \(10\%\) to \(90\%\), in Tables 5, 6, 7, 8, 9, 10, and 11, respectively. Moreover, we obtain the results herein by MATLAB R2021a.

Table 5 SSIM results of the filters for almonds, apples, balloons, and bananas images
Table 6 SSIM results of the filters for billiard balls 1, billiard balls 2, building, cards 1, cards 2, carrots, and chairs images
Table 7 SSIM results of the filters for clips, coins, cushions, duck, fence, flowers, and garden table images
Table 8 SSIM results of the filters for guitar bridge, guitar fret, guitar head, keyboard 1, keyboard 2, lion, and multimeter images
Table 9 SSIM results of the filters for pencils 1, pencils 2, pillar, plastic, roof, scarf, and screws images
Table 10 SSIM results of the filters for snails, socks, sweets, tomatoes 1, tomatoes 2, tools 1, and tools 2 images
Table 11 SSIM results of the filters for wood game image

For the problem, let \((\mu ^{ij}_{t})\) be an ordered quadragintuple (i.e. a 40-tuple) such that \(\mu ^{ij}_{t}\) corresponds to the SSIM result in Tables 5, 6, 7, 8, 9, 10, and 11 obtained for the \(t^{th}\) image by the \(i^{th}\) filter at the \(j^{th}\) noise density. Here, since \({\nu ^{ij}_{t}}=1-{\mu ^{ij}_{t}}\) and \({\pi ^{ij}_{t}}=0\) such that \(i \in I_{7}\), \(j \in I_{9}\), and \(t \in I_{40}\), then, for the d-matrix \([b_{ij}]\),

$$\begin{aligned} \alpha ^b_{ij}:=\left[ \frac{\min \limits _{t}{\mu ^{ij}_{t}}}{\max \limits _{t}{\mu ^{ij}_{t}} + \max \limits _{t}\{1-{\mu ^{ij}_{t}}\}} , \frac{\max \limits _{t}{\mu ^{ij}_{t}}}{\max \limits _{t}{\mu ^{ij}_{t}} + \max \limits _{t}\{1-{\mu ^{ij}_{t}}\}} \right] \end{aligned}$$

and

$$\begin{aligned} \beta ^b_{ij}:=\left[ \frac{\min \limits _{t}\{1-{\mu ^{ij}_{t}}\}}{\max \limits _{t}{\mu ^{ij}_{t}} + \max \limits _{t}\{1-{\mu ^{ij}_{t}}\}} , \frac{\max \limits _{t}\{1-{\mu ^{ij}_{t}}\}}{\max \limits _{t}{\mu ^{ij}_{t}} + \max \limits _{t}\{1-{\mu ^{ij}_{t}}\}} \right] \end{aligned}$$

For example, the ordered quadragintuple

$$\begin{aligned} \begin{array}{rl}(\mu ^{11}_{t}) =&{}(0.9815,0.9931,0.9935,0.9873,0.9953,0.9901,0.9821,0.9814,0.9894,0.9866,\\ &{}\,\,\, 0.9970,0.9869,0.9782,0.9937,0.9956,0.9840,0.9841,0.9751,0.9788,0.9874, \\ &{}\,\,\, 0.9776,0.9845,0.9782,0.9900,0.9760,0.9824,0.9822,0.9861,0.9735,0.9884, \\ &{}\,\,\, 0.9816,0.9832,0.9913,0.9688,0.9895,0.9924,0.9951,0.9824,0.9844,0.9915) \end{array} \end{aligned}$$

indicates the SSIM results of BPDF for the 40 test images at noise density \(10\%\). Since

$$\begin{aligned}&\alpha ^b_{11}=\left[ \frac{\min \limits _{t}{\mu ^{11}_{t}}}{\max \limits _{t}{\mu ^{11}_{t}} + \max \limits _{t}\{1-{\mu ^{11}_{t}}\}} , \frac{\max \limits _{t}{\mu ^{11}_{t}}}{\max \limits _{t}{\mu ^{11}_{t}} + \max \limits _{t}\{1-{\mu ^{11}_{t}}\}}\right] \\&\qquad = \left[ \frac{0.9688}{0.9970 + 0.0312} , \frac{0.9970}{0.9970 + 0.0312}\right] = [0.9422,0.9696] \end{aligned}$$

and

$$\begin{aligned}&\beta ^b_{11}=\left[ \frac{\min \limits _{t}\{1-{\mu ^{11}_{t}}\}}{\max \limits _{t}{\mu ^{11}_{t}} + \max \limits _{t}\{1-{\mu ^{11}_{t}}\}} , \frac{\max \limits _{t}\{1-{\mu ^{11}_{t}}\}}{\max \limits _{t}{\mu ^{11}_{t}} + \max \limits _{t}\{1-{\mu ^{11}_{t}}\}}\right] \\&\qquad = \left[ \frac{0.0030}{0.9970 + 0.0312} , \frac{0.0312}{0.9970 + 0.0312}\right] = [0.0029,0.0304] \end{aligned}$$

then \(b_{11}={^{[0.9422,0.9696]}_{[0.0029,0.0304]}}\). Here, [0.9422, 0.9696] denotes that the success of BPDF in image denoising (i.e. correcting corrupted pixels) at noise density \(10\%\) occurs approximately between \(94\%\) and \(96\%\). Moreover, [0.0029, 0.0304] means that the rate of BPDF’s failure in image denoising at the same noise density ranges from approximately \(0\%\) to \(3\%\). Similarly, all the rows of the d-matrix \([b_{ij}]\) except the zero-indexed row can be obtained. Furthermore, suppose that the filters’ noise-removal performances are more significant at high noise densities, where noisy pixels outnumber uncorrupted ones; then, performance-based success is more important at high noise densities than at the others. For example, let
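The computation of \(b_{11}\) above can be sketched in a few lines of Python (our choice of language for illustration; the paper's own computations are carried out in MATLAB R2021a). The quadragintuple is the one given above, and `alpha_beta` is a hypothetical helper name.

```python
# A minimal sketch, in Python rather than the authors' MATLAB, of the
# construction of the d-matrix entry b_11 from the SSIM quadragintuple
# of BPDF at noise density 10%.

mu = [0.9815, 0.9931, 0.9935, 0.9873, 0.9953, 0.9901, 0.9821, 0.9814, 0.9894, 0.9866,
      0.9970, 0.9869, 0.9782, 0.9937, 0.9956, 0.9840, 0.9841, 0.9751, 0.9788, 0.9874,
      0.9776, 0.9845, 0.9782, 0.9900, 0.9760, 0.9824, 0.9822, 0.9861, 0.9735, 0.9884,
      0.9816, 0.9832, 0.9913, 0.9688, 0.9895, 0.9924, 0.9951, 0.9824, 0.9844, 0.9915]

def alpha_beta(mu):
    """Interval-valued membership (alpha) and non-membership (beta) of a
    d-matrix entry, per the formulas above (nu_t = 1 - mu_t, pi_t = 0)."""
    nu = [1 - m for m in mu]
    denom = max(mu) + max(nu)
    return ((min(mu) / denom, max(mu) / denom),
            (min(nu) / denom, max(nu) / denom))

alpha, beta = alpha_beta(mu)
# alpha is approximately [0.9422, 0.9696] and beta approximately
# [0.0029, 0.0304], matching b_11 up to rounding.
```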

$$\begin{aligned} {[}b_{0j}]= \left[ {^{[0,0.01]}_{[0.9,0.95]}} \,\,\, {^{[0,0.05]}_{[0.85,0.9]}} \,\,\, {^{[0,0.1]}_{[0.8,0.85]}} \,\,\, {^{[0.05,0.35]}_{[0.25,0.5]}} \,\,\, {^{[0.2,0.45]}_{[0.2,0.45]}} \,\,\, {^{[0.25,0.5]}_{[0.05,0.35]}} \,\,\, {^{[0.8,0.85]}_{[0,0.1]}} \,\,\, {^{[0.85,0.9]}_{[0,0.05]}} \,\,\, {^{[0.9,0.95]}_{[0,0.01]}} \right] \end{aligned}$$

Thus, the d-matrix \([b_{ij}]\), modelling the SSIM values provided in Tables 5, 6, 7, 8, 9, 10, and 11, is as follows:

$$\begin{aligned}&\begin{array}{rllll} [b_{ij}]=&{}\left[ \begin{array}{ccccc} ^{[0,0.01]}_{[0.9,0.95]} &{} ^{[0,0.05]}_{[0.85,0.9]} &{} ^{[0,0.1]}_{[0.8,0.85]} &{} ^{[0.05,0.35]}_{[0.25,0.5]} &{} ^{[0.2,0.45]}_{[ 0.2,0.45]} \\ ^{[0.9422,0.9696]}_{[0.0029,0.0304]} \,\,\, &{} ^{[0.8771,0.9348]}_{[0.0074,0.0652]} \,\,\, &{} ^{[0.8040,0.8943]}_{[0.0154,0.1057]} \,\,\, &{} ^{[0.7263,0.8506]}_{[0.0250,0.1494]} \,\,\, &{} ^{[0.6424,0.7997]}_{[0.0430,0.2003]} \\ ^{[0.9382,0.9677]}_{[0.0027,0.0323]} \,\,\, &{} ^{[0.8538,0.9081]}_{[0.0376,0.0919]} \,\,\, &{} ^{[0.5711,0.7452]}_{[0.0806,0.2548]} \,\,\, &{} ^{[0.5393,0.7188]}_{[0.1017,0.2812]} \,\,\, &{} ^{[0.7090,0.8063]}_{[0.0965,0.1937]} \\ ^{[0.9488,0.9735]}_{[0.0019,0.0265]} \,\,\, &{} ^{[0.8965,0.9460]}_{[0.0044,0.0540]} \,\,\, &{} ^{[0.8340,0.9127]}_{[0.0085,0.0873]} \,\,\, &{} ^{[0.7636,0.8747]}_{[0.0143,0.1253]} \,\,\, &{} ^{[0.6865,0.8309]}_{[0.0248,0.1691]} \\ ^{[0.9272,0.9613]}_{[0.0045,0.0387]} \,\,\, &{} ^{[0.8780,0.9346]}_{[0.0087,0.0654]} \,\,\, &{} ^{[0.8192,0.9036]}_{[0.0121,0.0964]} \,\,\, &{} ^{[0.7564,0.8712]}_{[0.0140,0.1288]} \,\,\, &{} ^{[0.6936,0.8381]}_{[0.0174,0.1619]} \\ ^{[0.9569,0.9779]}_{[0.0011,0.0221]} \,\,\, &{} ^{[0.9120,0.9546]}_{[0.0027,0.0454]} \,\,\, &{} ^{[0.8616,0.9284]}_{[0.0048,0.0716]} \,\,\, &{} ^{[0.8087,0.9006]}_{[0.0075,0.0994]} \,\,\, &{} ^{[0.7515,0.8703]}_{[0.0108,0.1297]} \\ ^{[0.9336,0.9645]}_{[0.0046,0.0355]} \,\,\, &{} ^{[0.8990,0.9471]}_{[0.0048,0.0529]} \,\,\, &{} ^{[0.8582,0.9262]}_{[0.0057,0.0738]} \,\,\, &{} ^{[0.8130,0.9028]}_{[0.0073,0.0972]} \,\,\, &{} ^{[0.7620,0.8761]}_{[0.0099,0.1239]} \\ ^{[0.9648,0.9818]}_{[0.0012,0.0182]} \,\,\, &{} ^{[0.9276,0.9626]}_{[0.0025,0.0374]} \,\,\, &{} ^{[0.8852,0.9406]}_{[0.0040,0.0594]} \,\,\, &{} ^{[0.8381,0.9161]}_{[0.0059,0.0839]} \,\,\, &{} ^{[0.7848,0.8880]}_{[0.0088,0.1120]} \\ \end{array}\right. \\ \end{array} \\&\qquad \qquad \qquad \begin{array}{rlll} &{}\left. 
\begin{array}{cccc} ^{[0.25,0.5]}_{[0.05,0.35]} &{} ^{[0.8,0.85]}_{[0,0.1]} &{} ^{[0.85,0.9]}_{[0,0.05]} &{} ^{[0.9,0.95]}_{[0,0.01]} \\ ^{[0.5140,0.7240]}_{[0.0660,0.2760]} \,\,\, &{} ^{[0.3632,0.6306]}_{[0.1019,0.3694]} \,\,\, &{} ^{[0.2218,0.5204]}_{[0.1810,0.4796]} \,\,\, &{} ^{[0.0602,0.3613]}_{[0.3377,0.6387]} \\ ^{[0.6329,0.8015]}_{[0.0299,0.1985]} \,\,\, &{} ^{[0.5613,0.7681]}_{[0.0250,0.2319]} \,\,\, &{} ^{[0.4893,0.7034]}_{[0.0826,0.2966]} \,\,\, &{} ^{[0.2977,0.4583]}_{[0.3811,0.5417]} \\ ^{[0.5934,0.7781]}_{[0.0373,0.2219]} \,\,\, &{} ^{[0.4946,0.7170]}_{[0.0606,0.2830]} \,\,\, &{} ^{[0.3514,0.6284]}_{[0.0947,0.3716]} \,\,\, &{} ^{[0.2074,0.5295]}_{[0.1484,0.4705]} \\ ^{[0.6232,0.8009]}_{[0.0214,0.1991]} \,\,\, &{} ^{[0.5533,0.7614]}_{[0.0305,0.2386]} \,\,\, &{} ^{[0.4783,0.7147]}_{[0.0489,0.2853]} \,\,\, &{} ^{[0.3789,0.6262]}_{[0.1264,0.3738]} \\ ^{[0.6823,0.8338]}_{[0.0147,0.1662]} \,\,\, &{} ^{[0.6080,0.7942]}_{[0.0197,0.2058]} \,\,\, &{} ^{[0.5217,0.7463]}_{[0.0290,0.2537]} \,\,\, &{} ^{[0.4017,0.6762]}_{[0.0492,0.3238]} \\ ^{[0.6951,0.8408]}_{[0.0134,0.1592]} \,\,\, &{} ^{[0.6199,0.8008]}_{[0.0182,0.1992]} \,\,\, &{} ^{[0.5305,0.7515]}_{[0.0274,0.2485]} \,\,\, &{} ^{[0.4071,0.6814]}_{[0.0443,0.3186]} \\ ^{[0.7145,0.8510]}_{[0.0125,0.1490]} \,\,\, &{} ^{[0.6351,0.8088]}_{[0.0175,0.1912]} \,\,\, &{} ^{[0.5406,0.7568]}_{[0.0269,0.2432]} \,\,\, &{} ^{[0.4120,0.6840]}_{[0.0440,0.3160]} \\ \end{array}\right] \end{array} \end{aligned}$$

Finally, we apply the configured method to \([b_{ij}]\). Here too, we obtain the results by MATLAB R2021a.

Step 2.:

The column matrix \(\left[ ^{\alpha _{i1}}_{\beta _{i1}}\right] \) is as follows:

$$\begin{aligned} \left[ ^{\alpha _{i1}}_{\beta _{i1}}\right] = \left[ \begin{array}{llll} {^{[0.1837,0.5585]}_{[0.0087,0.1125]}} \,\,\, {^{[0.3244,0.6369]}_{[0.0323,0.1490]}} \,\,\, {^{[0.2678,0.6434]}_{[0.0050,0.0924]}} \,\,\, {^{[0.3383,0.6921]}_{[0.0065,0.0948]}} \\ {^{[0.3673,0.7252]}_{[0.0026,0.0723]}} \,\,\, {^{[0.3733,0.7299]}_{[0.0038,0.0755]}} \,\,\, {^{[0.3813,0.7369]}_{[0.0023,0.0623]}} \end{array}\right] ^{T} \end{aligned}$$
Step 3.:

The score matrix is as follows:

$$\begin{aligned} \begin{array}{rl} [s_{i1}]=&{} \left[ {[0.0712,0.5497]} \,\,\,\, {[0.1754,0.6047]} \,\,\,\, {[0.1753,0.6384]} \,\,\,\, {[0.2436,0.6856]} \right. \\ {} &{} \left. \,\,\,\, {[0.2950,0.7225]} \,\,\,\, {[0.2979,0.7261]} \,\,\,\, {[0.3190,0.7347]} \right] ^{T} \end{array} \end{aligned}$$
Step 4.:

The decision set is as follows:

$$\begin{aligned} \begin{array}{l} \left\{ ^{[0.1768,0.7705]}\text {BPDF}, ^{[0.3060,0.8387]}\text {MDBUTMF}, ^{[0.3059,0.8805]}\text {DBAIN}, ^{[0.3906,0.9392]}\text {NAFSMF}, \right. \\ \,\,\,\left. ^{[0.4544,0.9849]}\text {DAMF}, ^{[0.4580,0.9894]}\text {AWMF}, ^{[0.4842,1]}\text {ARmF}\right\} \end{array} \end{aligned}$$
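Steps 4 and 5 can likewise be sketched in Python (again our choice for illustration; the paper uses MATLAB R2021a). The score intervals are those in Step 3. Ranking the decision intervals by their midpoints is our illustrative assumption here, since the configured method's own interval order is defined in Aydın and Enginoğlu (2021a); for this data, the midpoint order reproduces the ranking below.

```python
# A minimal sketch, under the assumptions stated above, of Steps 4-5:
# shift and scale the score intervals into [0, 1], then rank the filters.

filters = ["BPDF", "MDBUTMF", "DBAIN", "NAFSMF", "DAMF", "AWMF", "ARmF"]
scores = [(0.0712, 0.5497), (0.1754, 0.6047), (0.1753, 0.6384), (0.2436, 0.6856),
          (0.2950, 0.7225), (0.2979, 0.7261), (0.3190, 0.7347)]

shift = abs(min(lo for lo, _ in scores))      # |min_i s_i1^-|
denom = max(hi for _, hi in scores) + shift   # max_i s_i1^+ + |min_i s_i1^-|
decision = {f: ((lo + shift) / denom, (hi + shift) / denom)
            for f, (lo, hi) in zip(filters, scores)}

# Illustrative interval order (an assumption): compare decision
# intervals by their midpoints.
ranking = sorted(filters, key=lambda f: sum(decision[f]) / 2)
# ranking ends with "ARmF", i.e. ARmF outperforms the other filters.
```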
Step 5.:

The ranking order

$$\begin{aligned} \text {BPDF} \prec \text {MDBUTMF} \prec \text {DBAIN} \prec \text {NAFSMF} \prec \text {DAMF} \prec \text {AWMF} \prec \text {ARmF} \end{aligned}$$

is valid. Therefore, the performance ranking of the filters shows that ARmF outperforms the other filters.

6 Comparative analysis

In this section, we compare the configured method with five SDM methods, namely iMBR01, iMRB02\((I_9)\), iCCE10, iCCE11, and iPEM, provided in (Arslan et al. 2021). To this end, first, Table 12 presents the filters’ ranking orders provided in (Arslan et al. 2021) when the methods are applied to the ifpifs-matrix \([a_{ij}]\) (Arslan et al. 2021) obtained using the results in Tables 1, 2, 3, and 4. Second, we construct the ifpifs-matrix \([c_{ij}]\) using the membership and non-membership functions in (Arslan et al. 2021) and the filters’ noise-removal performance results provided in Tables 5, 6, 7, 8, 9, 10, and 11. We then apply the five SDM methods to this ifpifs-matrix.

$$\begin{aligned} {[}c_{ij}]=\left[ \begin{array}{ccccccccc} ^{0.05}_{0.9} &{} ^{0.15}_{0.8} &{} ^{0.25}_{0.7} &{} ^{0.35}_{0.6} &{} ^{0.5}_{0.5} &{} ^{0.65}_{0.3} &{} ^{0.75}_{0.2} &{} ^{0.85}_{0.1} &{} ^{0.9}_{0.05} \\ ^{0.9688}_{0.0030} \,\,\,\, &{} ^{0.9308}_{0.0079} \,\,\,\, &{} ^{0.8838}_{0.0169} \,\,\,\, &{} ^{0.8294}_{0.0286} \,\,\,\, &{} ^{0.7623}_{0.0510} \,\,\,\, &{} ^{0.6506}_{0.0836} \,\,\,\, &{} ^{0.4958}_{0.1391} \,\,\,\, &{} ^{0.3162}_{0.2581} \,\,\,\, &{} ^{0.0861}_{0.4831} \\ ^{0.9668}_{0.0028} \,\,\,\, &{} ^{0.9028}_{0.0397} \,\,\,\, &{} ^{0.6915}_{0.0976} \,\,\,\, &{} ^{0.6573}_{0.1239} \,\,\,\, &{} ^{0.7854}_{0.1069} \,\,\,\, &{} ^{0.7613}_{0.0359} \,\,\,\, &{} ^{0.7076}_{0.0316} \,\,\,\, &{} ^{0.6226}_{0.1051} \,\,\,\, &{} ^{0.3547}_{0.4540} \\ ^{0.9728}_{0.0019} \,\,\,\, &{} ^{0.9432}_{0.0046} \,\,\,\, &{} ^{0.9053}_{0.0092} \,\,\,\, &{} ^{0.8590}_{0.0161} \,\,\,\, &{} ^{0.8023}_{0.0290} \,\,\,\, &{} ^{0.7278}_{0.0457} \,\,\,\, &{} ^{0.6361}_{0.0780} \,\,\,\, &{} ^{0.4860}_{0.1310} \,\,\,\, &{} ^{0.3059}_{0.2189} \\ ^{0.9600}_{0.0047} \,\,\,\, &{} ^{0.9307}_{0.0092} \,\,\,\, &{} ^{0.8947}_{0.0132} \,\,\,\, &{} ^{0.8545}_{0.0159} \,\,\,\, &{} ^{0.8107}_{0.0203} \,\,\,\, &{} ^{0.7579}_{0.0260} \,\,\,\, &{} ^{0.6987}_{0.0385} \,\,\,\, &{} ^{0.6264}_{0.0641} \,\,\,\, &{} ^{0.5034}_{0.1679} \\ ^{0.9774}_{0.0011} \,\,\,\, &{} ^{0.9526}_{0.0028} \,\,\,\, &{} ^{0.9232}_{0.0052} \,\,\,\, &{} ^{0.8905}_{0.0083} \,\,\,\, &{} ^{0.8528}_{0.0123} \,\,\,\, &{} ^{0.8041}_{0.0174} \,\,\,\, &{} ^{0.7471}_{0.0242} \,\,\,\, &{} ^{0.6729}_{0.0375} \,\,\,\, &{} ^{0.5537}_{0.0678} \\ ^{0.9634}_{0.0047} \,\,\,\, &{} ^{0.9444}_{0.0050} \,\,\,\, &{} ^{0.9209}_{0.0061} \,\,\,\, &{} ^{0.8932}_{0.0081} \,\,\,\, &{} ^{0.8601}_{0.0111} \,\,\,\, &{} ^{0.8137}_{0.0157} \,\,\,\, &{} ^{0.7568}_{0.0223} \,\,\,\, &{} ^{0.6810}_{0.0352} \,\,\,\, &{} ^{0.5610}_{0.0611} \\ ^{0.9815}_{0.0012} \,\,\,\, &{} ^{0.9612}_{0.0026} \,\,\,\, &{} ^{0.9371}_{0.0042} \,\,\,\, &{} 
^{0.9090}_{0.0064} \,\,\,\, &{} ^{0.8751}_{0.0098} \,\,\,\, &{} ^{0.8274}_{0.0145} \,\,\,\, &{} ^{0.7686}_{0.0212} \,\,\,\, &{} ^{0.6897}_{0.0343} \,\,\,\, &{} ^{0.5659}_{0.0604} \\ \end{array}\right] \end{aligned}$$
Table 12 Ranking orders generated by five SDM methods (Arslan et al. 2021)
Table 13 Decision sets when five SDM methods are applied to \([c_{ij}]\)
Table 14 Noise removal filters’ ranking orders when five SDM methods are applied to \([c_{ij}]\)

In Tables 13 and 14, we present the decision sets and the noise-removal filters’ ranking orders, respectively, when the five SDM methods are applied to \([c_{ij}]\). In Section 5, we show that the configured method produces the same ranking order for the filters’ SSIM results obtained with 20 traditional test images and with 40 test images at nine noise densities. Thus, the configured method confirms the ranking order provided in (Aydın and Enginoğlu 2021a) and those of iCCE10 and iCCE11 in Tables 12 and 14. On the other hand, although iPEM provides the same ranking order as iCCE10 and iCCE11 for the 40 test images, iMBR01, iMRB02\((I_9)\), and iPEM generate different ranking orders for the 20 traditional test images. Consequently, the configured method is more consistent than iMBR01, iMRB02\((I_9)\), and iPEM. These observations show that the SDM method constructed with d-matrices is more advantageous in dealing with problems involving multiple measurement results.

7 Conclusion

In this paper, we defined the concept of d-matrices. Furthermore, we introduced its basic operations and investigated some of their basic properties. We then configured the SDM method (Aydın and Enginoğlu 2021a) to operate in d-matrices space. Moreover, we applied it to two d-matrices constructed with the SSIM results of the known noise-removal filters for 40 test images, provided in the TESTIMAGES database (Asuni and Giachetti 2014), and for 20 traditional test images. The results of this application confirmed the one available in Aydın and Enginoğlu (2021a). Thus, the configured method enables problems containing a large number of data to be processed on a computer. In addition, we applied five state-of-the-art SDM methods constructed with ifpifs-matrices to the same problem and compared the ranking performance of the configured method with those of the five methods.

The results in the present study demonstrated that the configured method can be successfully applied to a decision-making problem containing ivif uncertainties. Therefore, further research should focus on developing effective SDM methods based on group decision making using AND/OR/ANDNOT/ORNOT-products of d-matrices. Moreover, it is possible to render the SDM methods constructed with fpfs-matrices (Enginoğlu and Memiş 2018d, 2020; Enginoğlu et al. 2018a, b, 2019c, d, 2021a) and ifpifs-matrices (Enginoğlu and Arslan 2020) operable in d-matrices space. Furthermore, the membership and non-membership functions used to obtain an ivif-value from multiple intuitionistic fuzzy values can be defined in different ways and used to construct a d-matrix in the first step of the configured method. Thus, these new methods can be applied to the problem featured in the current study, and the results can be compared with those herein. In addition, it is worthwhile to conduct theoretical and applied studies on varied topics, such as distance and similarity measures, by making use of d-matrices. Researchers can also study the various hybrid versions of soft sets and the other generalisations of fuzzy sets, such as hesitant fuzzy sets (Torra 2010), linear Diophantine fuzzy sets (Riaz and Hashmi 2019), spherical linear Diophantine fuzzy sets (Riaz et al. 2021), and picture fuzzy sets (Cuong 2014; Memiş 2021), and their matrices.