1 Introduction

Cognitive diagnosis models (CDMs), also called diagnostic classification models (DCMs), are useful statistical tools in cognitive diagnosis assessment, which aims to achieve a fine-grained decision on an individual’s latent attributes, such as skills, knowledge, personality traits, or psychological disorders, based on his or her observed responses to designed diagnostic items. The CDMs fall into the more general family of restricted latent class models in the statistics literature and model the complex relationships among the items, the latent attributes, and the item responses for a set of items and a sample of respondents. Various CDMs have been developed under different cognitive diagnosis assumptions, among which the Deterministic Input Noisy output “And” gate model (DINA; Junker and Sijtsma, 2001) is a popular one and serves as a basic submodel for more general CDMs such as the general diagnostic model (von Davier, 2005), the log linear CDM (LCDM; Henson et al., 2009), and the generalized DINA model (GDINA; de la Torre, 2011).

To achieve reliable and valid diagnostic assessment, a fundamental issue is to ensure that the CDMs applied in the cognitive diagnosis are statistically identifiable, which is a necessity for consistent estimation of the model parameters of interest and valid statistical inferences. The study of identifiability in statistics and psychometrics has a long history (Koopmans and Reiersøl, 1950; McHugh, 1956; Rothenberg, 1971; Goodman, 1974; Gabrielsen, 1978). The identifiability issue of the CDMs has also long been a concern, as noted in the literature (DiBello et al., 1995; Maris and Bechger, 2009; Tatsuoka, 2009; DeCarlo, 2011; von Davier, 2014). In practice, however, there is often a tendency to overlook the issue due to the lack of easily verifiable identifiability conditions. Recently, there have been several studies on the identifiability of the CDMs, including the DINA model (e.g., Xu and Zhang, 2016) and general models (e.g., Xu, 2017; Xu and Shang, 2017; Fang et al., 2017).

However, the existing works mostly focus on developing sufficient conditions for model identifiability, which might impose stronger-than-needed, or sometimes even impractical, constraints on designing identifiable cognitive diagnostic tests. It remains an open problem in the literature what the minimal requirement, i.e., the sufficient and necessary condition, would be for the models to be identifiable. In particular, for the DINA model, Xu and Zhang (2016) proposed a set of sufficient conditions and a set of necessary conditions for the identifiability of the slipping, guessing, and population proportion parameters. However, as pointed out by the authors, there is a gap between the two sets of conditions; see Xu and Zhang (2016) for examples and discussions.

This paper addresses this open problem by developing the sufficient and necessary condition for the identifiability of the DINA model. Furthermore, we show that the identifiability condition ensures the statistical consistency of the maximum likelihood estimators of the model parameters. The proposed condition not only guarantees identifiability, but also gives the minimal requirement that the DINA model needs to meet in order to be identifiable. The identifiability result can be directly applied to the DINO model (Templin and Henson, 2006) through the duality of the DINA and DINO models. For general CDMs such as the LCDM and GDINA models, the proposed condition also serves as a necessary requirement, since the DINA model can be considered a submodel of them. From a practical perspective, the sufficient and necessary condition depends only on the Q-matrix structure, and such an easily checkable condition provides a practical guideline for designing statistically valid and estimable cognitive tests.

The rest of the paper is organized as follows. Section 2 introduces the basic model setup and the definition of identifiability. Section 3 states the identifiability results and includes several illustrating examples. Section 4 gives a further discussion, and the Appendix provides the proof of the main results.

2 Model Setup and Identifiability

We consider the setting of a cognitive diagnosis test with binary responses. The test contains J items to measure K unobserved latent attributes. The latent attributes are assumed to be binary for diagnostic purposes, and a complete configuration of the K latent attributes is called an attribute profile, which is denoted by a K-dimensional binary vector \({\varvec{\alpha }}=(\alpha _1,\ldots ,\alpha _K)^\top \), where \(\alpha _k\in \{0,1\}\) represents deficiency or mastery of the kth attribute. The underlying cognitive structure, i.e., the relationship between the items and the attributes, is described by the so-called Q-matrix, originally proposed by Tatsuoka (1983). A Q-matrix Q is a \(J\times K\) binary matrix with entries \(q_{j,k}\in \{0,1\}\) indicating the absence or presence of the dependence of the jth item on the kth attribute. The jth row vector \({\varvec{q}}_j\) of the Q-matrix, also called the \({\varvec{q}}\)-vector, corresponds to the attribute requirements of item j.

A subject’s attribute profile is assumed to follow a categorical distribution with population proportion parameters \({\varvec{p}}:= (p_{{\varvec{\alpha }}}:{\varvec{\alpha }}\in \{0,1\}^K)^\top \), where \(p_{{\varvec{\alpha }}}\) is the proportion of attribute profile \({\varvec{\alpha }}\) in the population and \({\varvec{p}}\) satisfies \(\sum _{{\varvec{\alpha }}\in \{0,1\}^K}p_{{\varvec{\alpha }}}=1\) and \(p_{{\varvec{\alpha }}}>0\) for any \({\varvec{\alpha }}\in \{0,1\}^K\). For an attribute profile \({\varvec{\alpha }}\) and a \({\varvec{q}}\)-vector \({\varvec{q}}_j\), we write \({\varvec{\alpha }}\succeq {\varvec{q}}_j\) if \({\varvec{\alpha }}\) possesses all the attributes required by item j, i.e., \(\alpha _k \ge q_{j,k}\) for all \(k \in \{1, \ldots , K\}\), and write \({\varvec{\alpha }}\nsucceq {\varvec{q}}_j\) if there exists some k such that \(\alpha _k < q_{j,k}.\) The operations \(\preceq \) and \(\npreceq \) are defined similarly.

A subject provides a J-dimensional binary response vector \({\varvec{R}}=(R_1,\ldots ,R_J)^\top \in \{0,1\}^J\) to these J items. The DINA model assumes a conjunctive relationship among attributes, which means it is necessary to master all the attributes required by an item to be capable of providing a positive response to it. Moreover, mastering additional unnecessary attributes does not compensate for the lack of the necessary attributes. Specifically, for any item j and attribute profile \({\varvec{\alpha }}\), we define the binary ideal response \(\xi _{j,{\varvec{\alpha }}} = I({\varvec{\alpha }}\succeq {\varvec{q}}_j)\). If there is no uncertainty in the response, then a subject with attribute profile \({\varvec{\alpha }}\) will have response \(R_j=\xi _{j,{\varvec{\alpha }}} \) to item j. The uncertainty of the responses is incorporated at the item level, using slipping and guessing parameters. For each item j, the slipping parameter \(s_j := P(R_j=0\mid \xi _{j,{\varvec{\alpha }}} =1)\) denotes the probability of a subject giving a negative response despite mastering all the necessary skills, while the guessing parameter \(g_j := P(R_j=1\mid \xi _{j,{\varvec{\alpha }}} =0)\) denotes the probability of giving a positive response despite deficiency of some necessary skills.

Note that if some item j does not require any of the attributes, namely \({\varvec{q}}_j\) equals the zero vector \(\mathbf {0}\), then \(\xi _{j,{\varvec{\alpha }}}=1\) for all attribute profiles \({\varvec{\alpha }}\in \{0,1\}^K\). Therefore, in this special case, the guessing parameter is not needed in the specification of the DINA model. The DINA model item parameters then include slipping parameters \({\varvec{s}}= (s_1,\ldots ,s_J)^\top \) and guessing parameters \({\varvec{g}}^{-} = (g_j: \forall j \text{ such } \text{ that } {\varvec{q}}_j\ne \mathbf {0} )^\top \). We assume \(1-s_j>g_j\) for any item j with \({\varvec{q}}_j\ne \mathbf {0}\). For notational simplicity, in the following discussion we define the guessing parameter of any item with \({\varvec{q}}_j=\mathbf {0}\) to be a known value \(g_j\equiv 0\), and write \({\varvec{g}}=(g_1,\ldots , g_J)^\top \).

Conditional on the attribute profile \({\varvec{\alpha }}\), the DINA model further assumes a subject’s responses are independent. Therefore, the probability mass function of a subject’s response vector \({\varvec{R}}= (R_1,\ldots ,R_J)^\top \) is

$$\begin{aligned}&P({\varvec{R}}={\varvec{r}}\mid Q,{\varvec{s}},{\varvec{g}},{\varvec{p}}) \nonumber \\&\quad = \sum _{{\varvec{\alpha }}\in \{0,1\}^K}p_{{\varvec{\alpha }}} \prod _{j=1}^J (1-s_j)^{\xi _{j,{\varvec{\alpha }}} r_j} g_j^{(1-\xi _{j,{\varvec{\alpha }}})r_j} s_j^{\xi _{j,{\varvec{\alpha }}}(1-r_j)}(1-g_j)^{(1-\xi _{j,{\varvec{\alpha }}})(1-r_j)}, \end{aligned}$$
(1)

where \({\varvec{r}}= (r_1,\ldots ,r_J)^\top \in \{0,1\}^J\).
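As an illustration, the pmf in (1) can be evaluated directly by enumerating the attribute profiles. The following Python sketch is our own illustration (the function names are not from the original presentation); it mirrors the formula by using the success probability \(1-s_j\) for item j when \(\xi _{j,{\varvec{\alpha }}}=1\) and \(g_j\) otherwise.

```python
import numpy as np

def ideal_response(Q, alpha):
    """xi_{j,alpha} = 1 iff alpha masters every attribute required by item j."""
    return np.all(alpha >= Q, axis=1).astype(int)  # shape (J,)

def dina_pmf(r, Q, s, g, p):
    """P(R = r | Q, s, g, p) as in Eq. (1).

    r: binary response pattern of length J; Q: (J, K) binary matrix;
    s, g: length-J slipping/guessing parameters;
    p: dict mapping attribute profiles (tuples) to proportions.
    """
    prob = 0.0
    for alpha, p_alpha in p.items():
        xi = ideal_response(Q, np.array(alpha))
        # per-item success probability for this profile: 1 - s_j if xi = 1, else g_j
        theta = np.where(xi == 1, 1 - s, g)
        prob += p_alpha * np.prod(np.where(r == 1, theta, 1 - theta))
    return prob
```

For example, with a single item requiring a single attribute, \(s_1=0.2\), \(g_1=0.1\), and a uniform attribute population, the positive-response probability is \(0.5\cdot 0.1+0.5\cdot 0.8=0.45\).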

Suppose we have N independent subjects, indexed by \(i=1,\ldots , N\), in a cognitive diagnostic assessment. We denote their response vectors by \(\{{\varvec{R}}_i: i=1,\ldots ,N\}\), which are our observed data. The DINA model parameters that we aim to estimate from the response data are \(({\varvec{s}},{\varvec{g}},{\varvec{p}})\), based on which we can further evaluate the subjects’ attribute profiles from their “posterior distributions.” To consistently estimate \(({\varvec{s}},{\varvec{g}},{\varvec{p}})\), we need them to be identifiable. Following the definition in the statistics literature (e.g., Casella and Berger, 2002), we say a set of parameters in the parameter space B for a family of probability density (mass) functions \(\{f(~\cdot \mid \beta ):\beta \in B\}\) is identifiable if distinct values of \(\beta \) correspond to distinct \(f(~\cdot \mid \beta )\) functions, i.e., for any \(\beta \) there is no \(\tilde{\beta }\in B\backslash \{\beta \}\) such that \(f(~\cdot \mid \beta ) \equiv f(~\cdot \mid \tilde{\beta })\). In the context of the DINA model, we have the following definition.

Definition 1

We say the DINA model parameters are identifiable if there is no \((\bar{{\varvec{s}}},\bar{{\varvec{g}}},\bar{{\varvec{p}}})\ne ({\varvec{s}},{\varvec{g}},{\varvec{p}})\) such that

$$\begin{aligned} P({\varvec{R}}={\varvec{r}}\mid Q,{\varvec{s}},{\varvec{g}},{\varvec{p}}) = P({\varvec{R}}={\varvec{r}}\mid Q,\bar{{\varvec{s}}},\bar{{\varvec{g}}},\bar{{\varvec{p}}}) \text{ for } \text{ all } {\varvec{r}}\in \{0,1\}^J. \end{aligned}$$
(2)

Remark 1

Identifiability of latent class models is a well-established concept in the literature (e.g., McHugh, 1956; Goodman, 1974). Recent studies on the identifiability of the CDMs and the restricted latent class models include Liu et al. (2013), Chen et al. (2015), Xu and Zhang (2016), Xu (2017), and Xu and Shang (2017). However, as discussed in the introduction, most of them focus on developing sufficient conditions, while the sufficient and necessary conditions are still unknown.

3 Main Result

We first review the important concept of the completeness of a Q-matrix, which was introduced in Chiu et al. (2009). A Q-matrix is said to be complete if it can differentiate all latent attribute profiles, in the sense that under the Q-matrix, different attribute profiles have different response distributions. Under the DINA model, completeness of the Q-matrix means that \(\{{\varvec{e}}_k^\top :k=1,\ldots ,K\}\subseteq \{{\varvec{q}}_j:j=1,\ldots ,J\}\); equivalently, for each attribute there is some item that requires that attribute and no others. Up to some row permutation, a complete Q-matrix under the DINA model contains a \(K\times K\) identity matrix. Under the DINA model, completeness of the Q-matrix is necessary for identifiability of the population proportion parameters \({\varvec{p}}\) (Xu and Zhang, 2016).

Besides completeness, an additional necessary condition for identifiability was specified in Xu and Zhang (2016): each attribute needs to be required by at least three items. For ease of discussion, we summarize the set of necessary conditions in Xu and Zhang (2016) as follows.

Condition 1

  (i)

    The Q-matrix is complete under the DINA model, and without loss of generality, we assume the Q-matrix takes the following form:

    $$\begin{aligned} Q = \begin{pmatrix} \mathcal{I}_K \\ Q^* \end{pmatrix}, \end{aligned}$$

    (3)

    where \(\mathcal{I}_K\) denotes the \(K\times K\) identity matrix and \(Q^*\) is a \((J-K)\times K\) submatrix of Q.

  (ii)

    Each of the K attributes is required by at least three items.

Xu and Zhang (2016) recognized that Condition 1, though necessary, is not sufficient. To establish identifiability, the authors also proposed a set of sufficient conditions, which, however, is not necessary. For instance, the Q-matrix in (4), given on page 633 of Xu and Zhang (2016), does not satisfy their sufficient conditions but still gives an identifiable model.

(4)

In particular, their sufficient condition C4 requires that for each \(k\in \{1,\ldots , K\}\), there exist two subsets \(S_k^+\) and \(S_k^-\) of the items (not necessarily nonempty or disjoint) in \(Q^*\) such that \(S_k^+\) and \(S_k^-\) have attribute requirements that are identical except in the kth attribute, which is required by an item in \(S_k^+\) but not by any item in \(S_k^-\). However, the first attribute in (4) does not satisfy this condition. Q-matrices of this kind, which violate C4 yet remain identifiable, are not rare and can be easily constructed, as shown in (5).

(5)

It has been an open problem in the literature what the minimal requirement of the Q-matrix would be for the model to be identifiable. This paper solves this problem and shows that Condition 1, together with the following Condition 2, is sufficient and necessary for the identifiability of the DINA model parameters.

Condition 2

Any two different columns of the submatrix \(Q^*\) in (3) are distinct.

We have the following identifiability result.

Theorem 1

(Sufficient and Necessary Condition) Conditions 1 and 2 are sufficient and necessary for the identifiability of all the DINA model parameters.

Remark 2

From the model construction, when some items require none of the attributes, the DINA model parameters are \(({\varvec{s}},{\varvec{g}}^{-},{\varvec{p}})\) with \({\varvec{g}}^{-} = (g_j: \forall j \text{ such that } {\varvec{q}}_j\ne \mathbf {0} )^\top \). Theorem 1 also applies to this special case: the proposed conditions remain sufficient and necessary for the identifiability of \(({\varvec{s}},{\varvec{g}}^{-},{\varvec{p}})\) under a Q-matrix containing some all-zero \({\varvec{q}}\)-vectors. See Proposition 2 in the Appendix for more details.

Conditions 1 and 2 are easy to verify. Based on Theorem 1, it is recommended in practice to design the Q-matrix so that it is complete, has each attribute required by at least three items, and has K distinct columns in the submatrix \(Q^*\). Otherwise, the model parameters would suffer from non-identifiability. We use the following examples to illustrate the theoretical result.
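To illustrate how easily the conditions can be verified, the following Python sketch checks Conditions 1 and 2 for a given binary Q-matrix. The function and its interface are our own illustration (in particular, when a unit row appears more than once, the sketch removes one copy of each to form \(Q^*\), matching the form in (3) up to row permutation).

```python
import numpy as np

def check_conditions(Q):
    """Check Conditions 1 and 2 for a binary Q-matrix (rows = items, cols = attributes)."""
    Q = np.asarray(Q)
    J, K = Q.shape
    eye = np.eye(K, dtype=int)
    # Condition 1(i): completeness -- every unit vector e_k appears as a row of Q
    complete = all(any(np.array_equal(row, eye[k]) for row in Q) for k in range(K))
    # Condition 1(ii): each attribute is required by at least three items
    three_items = bool(np.all(Q.sum(axis=0) >= 3))
    # Condition 2: after removing one copy of each e_k, the remaining rows form Q*,
    # whose K columns must all be distinct
    distinct = True
    if complete:
        rows = list(map(tuple, Q))
        for k in range(K):
            rows.remove(tuple(eye[k]))          # drop one identity row per attribute
        Qstar = np.array(rows).reshape(-1, K)
        distinct = len({tuple(Qstar[:, k]) for k in range(K)}) == K
    return complete and three_items and distinct
```

For instance, the \(6\times 3\) Q-matrix of Example 4 below passes the check, while a \(K=2\) matrix of the form (7), whose \(Q^*\) has two identical columns, fails it.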

Example 1

From Theorem 1, the Q-matrices in (4) and (5) satisfy both Conditions 1 and 2 and therefore give identifiable models, while the results in Xu and Zhang (2016) cannot be applied since their condition C4 does not hold. On the other hand, the Q-matrices below in (6) satisfy the necessary conditions in Xu and Zhang (2016), but they do not satisfy our Condition 2, so the corresponding models are not identifiable.

(6)

Example 2

To illustrate the necessity of Condition 2, we consider the simple case of \(K=2\). If Condition 1 is satisfied but Condition 2 does not hold, then up to row permutations the Q-matrix can only take the following form,

$$\begin{aligned} Q=\begin{pmatrix} 1 &{}\quad 0 \\ 0 &{}\quad 1 \\ 0 &{}\quad 0 \\ \vdots &{}\quad \vdots \\ 0 &{}\quad 0 \\ 1 &{}\quad 1 \\ \vdots &{}\quad \vdots \\ 1 &{}\quad 1 \end{pmatrix}, \end{aligned}$$

(7)

where the first two items give an identity matrix, the next \(J_0\) items require none of the attributes, and the last \(J-2-J_0\) items require both attributes. Under the Q-matrix in (7), we next show that the model parameters \(({\varvec{s}},{\varvec{g}},{\varvec{p}})\) are not identifiable by constructing a set of parameters \((\bar{{\varvec{s}}},\bar{{\varvec{g}}},\bar{{\varvec{p}}})\ne ({\varvec{s}},{\varvec{g}},{\varvec{p}})\) satisfying (2). Recall from the model setup in Sect. 2 that for any item \(j\in \{3,\ldots ,J_0+2\}\) with \({\varvec{q}}_j=\mathbf {0}\), the guessing parameter is not needed by the DINA model, and for notational convenience we set \(g_j \equiv \bar{g}_j \equiv 0\). We take \(\bar{{\varvec{s}}} ={\varvec{s}}\), \(\bar{g}_j=g_j\) for \(j=J_0+3,\ldots , J\), and \(\bar{p}_{(11)}=p_{(11)}\). Next we show that the remaining parameters \((g_1,g_2,p_{(00)}, p_{(10)},p_{(01)})\) are not identifiable. From Definition 1, non-identifiability occurs if \(P\big ((R_1,R_2)=(r_1,r_2)\mid Q, \bar{{\varvec{s}}},\bar{{\varvec{g}}},\bar{{\varvec{p}}}\big ) = P\big ((R_1,R_2)=(r_1,r_2)\mid Q, {\varvec{s}},{\varvec{g}},{\varvec{p}}\big )\) holds for all \((r_1,r_2)\in \{0,1\}^2,\) where \((R_1,R_2)\) are the first two entries of the random response vector \({\varvec{R}}\) (see the Supplementary Material for the computational details). Equivalently, after a linear transformation, these equations equate \(P(R_1\ge r_1, R_2\ge r_2)\) for all \((r_1,r_2)\), which yields the system in (8):

$$\begin{aligned} (r_1,r_2) = {\left\{ \begin{array}{ll} (0,0):~ \bar{p}_{(00)} + \bar{p}_{(10)} + \bar{p}_{(01)} + p_{(11)} = p_{(00)} + p_{(10)} + p_{(01)}+ p_{(11)};\\ (1,0):~\bar{g}_1[\bar{p}_{(00)} + \bar{p}_{(01)}] + (1-s_1)[\bar{p}_{(10)} + p_{(11)}] \\ \qquad \qquad = g_1[ p_{(00)} + p_{(01)}] + (1-s_1)[ p_{(10)} + p_{(11)}];\\ (0,1):~\bar{g}_2[\bar{p}_{(00)} + \bar{p}_{(10)}] + (1-s_2)[\bar{p}_{(01)} + p_{(11)}] \\ \qquad \qquad = g_2[ p_{(00)} + p_{(10)}] + (1-s_2)[ p_{(01)} + p_{(11)}];\\ (1,1):~ \bar{g}_1\bar{g}_2\bar{p}_{(00)} + \bar{g}_1(1- s_2)\bar{p}_{(01)}+(1- s_1)\bar{g}_2\bar{p}_{(10)}+(1- s_1)(1- s_2) p_{(11)} \\ \qquad \qquad = g_1 g_2 p_{(00)} + g_1(1- s_2) p_{(01)}+(1- s_1) g_2 p_{(10)}+(1- s_1)(1- s_2) p_{(11)} . \end{array}\right. } \end{aligned}$$
(8)

For any \(({\varvec{s}},{\varvec{g}},{\varvec{p}})\), the system (8) imposes four constraints on the five free parameters \((\bar{g}_1,\bar{g}_2,\bar{p}_{(00)}, \bar{p}_{(10)},\bar{p}_{(01)})\). Therefore, there exist infinitely many valid solutions in a neighborhood of the true parameters, and \(({\varvec{s}},{\varvec{g}},{\varvec{p}})\) is non-identifiable.
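The construction in Example 2 can be carried out numerically: fixing \(\bar{p}_{(00)}\) slightly away from \(p_{(00)}\) and solving the remaining four constraints of (8) produces a different parameter set with an identical response distribution. The following Python sketch is our own illustration for \(K=2\), \(J_0=0\), \(J=4\); the parameter values and the Newton solver are assumptions, and the common \(p_{(11)}\) terms of (8) are canceled from both sides before solving.

```python
import numpy as np
from itertools import product

# Q-matrix of the form (7) with K = 2, J_0 = 0, J = 4
Q = np.array([[1, 0], [0, 1], [1, 1], [1, 1]])
profiles = [(0, 0), (1, 0), (0, 1), (1, 1)]  # ordering (00), (10), (01), (11)

def pmf_table(s, g, p):
    """Joint pmf P(R = r) over all 2^J response patterns, as in Eq. (1)."""
    table = {}
    for r in product([0, 1], repeat=Q.shape[0]):
        total = 0.0
        for alpha, p_a in zip(profiles, p):
            xi = np.all(np.array(alpha) >= Q, axis=1)
            theta = np.where(xi, 1 - s, g)
            total += p_a * np.prod(np.where(np.array(r) == 1, theta, 1 - theta))
        table[r] = total
    return table

# true parameters (an assumed example)
s = np.full(4, 0.2)
g = np.full(4, 0.2)
p = np.array([0.1, 0.3, 0.4, 0.2])           # (p00, p10, p01, p11)
pbar00 = 0.998 * p[0]                         # fix one of the five free parameters

def residual(x):
    """The four constraints of (8); the common p11 terms cancel from both sides."""
    p10b, p01b, g1b, g2b = x
    return np.array([
        pbar00 + p10b + p01b - (p[0] + p[1] + p[2]),
        g1b * (pbar00 + p01b) + (1 - s[0]) * p10b
            - g[0] * (p[0] + p[2]) - (1 - s[0]) * p[1],
        g2b * (pbar00 + p10b) + (1 - s[1]) * p01b
            - g[1] * (p[0] + p[1]) - (1 - s[1]) * p[2],
        g1b * g2b * pbar00 + g1b * (1 - s[1]) * p01b + (1 - s[0]) * g2b * p10b
            - g[0] * g[1] * p[0] - g[0] * (1 - s[1]) * p[2] - (1 - s[0]) * g[1] * p[1],
    ])

# Newton's method with a finite-difference Jacobian, started at the true values
x = np.array([p[1], p[2], g[0], g[1]])
for _ in range(50):
    fx = residual(x)
    J = np.empty((4, 4))
    for k in range(4):
        xk = x.copy()
        xk[k] += 1e-7
        J[:, k] = (residual(xk) - fx) / 1e-7
    x = x - np.linalg.solve(J, fx)
p10b, p01b, g1b, g2b = x

pbar = np.array([pbar00, p10b, p01b, p[3]])
gbar = np.array([g1b, g2b, g[2], g[3]])
t, tbar = pmf_table(s, g, p), pmf_table(s, gbar, pbar)
assert max(abs(t[r] - tbar[r]) for r in t) < 1e-8   # identical response distributions
assert abs(g1b - g[0]) > 1e-4                        # yet different parameters
```

The two parameter sets differ (e.g., in \(\bar{g}_1\)) yet induce the same distribution over all \(2^J\) response patterns, mirroring the argument above.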

Example 3

We provide a numerical illustration of Example 2. Without loss of generality, we take \(J_0=0\), since the presence of items with zero \({\varvec{q}}\)-vectors has no impact on the non-identifiability phenomenon illustrated in (8). We take \(J=10\) and set the true parameters to be \((p_{(00)},~p_{(10)},~p_{(01)},~p_{(11)})=(0.1,~ 0.3,~ 0.4,~ 0.2)\) and \(s_j=g_j=0.2\) for \(j\in \{1,\ldots , 10\}\). We first generate a random sample of size \(N=200\). From the data, we obtain one set of maximum likelihood estimators as follows:

$$\begin{aligned} \begin{aligned}&(\hat{p}_{(00)},~\hat{p}_{(10)},~\hat{p}_{(01)},~\hat{p}_{(11)}) =(\mathbf {0.22346},~\mathbf {0.26298},~\mathbf {0.32847},~0.18509);\\&\hat{\varvec{s}}= (0.1269,~0.1541,~0.0000,~0.2015,~0.1549,~0.2638,~0.3551,~0.1903,~0.1843,~0.1468);\\&\hat{\varvec{g}}= (\mathbf {0.1678},~\mathbf {0.2011},~ 0.2330,~ 0.1990,~ 0.2007,~ 0.2316,~ 0.2155,~ 0.1720,~ 0.2197,~ 0.1805). \end{aligned} \end{aligned}$$

Based on (8), we can construct infinitely many sets of \((\bar{{\varvec{s}}},\bar{{\varvec{g}}},\bar{{\varvec{p}}})\) that are also maximum likelihood estimators. For instance, we take \(\bar{{\varvec{s}}}=\hat{\varvec{s}}\), \(\bar{g}_j = \hat{g}_j\) for \(j=3,\ldots ,10\), \(\bar{p}_{(11)}=\hat{p}_{(11)}\), and \(\bar{p}_{(00)} = 0.998\cdot \hat{p}_{(00)}\), and then solve (8) for the remaining parameters \(\bar{p}_{(10)}\), \(\bar{p}_{(01)}\), \(\bar{g}_1\), and \(\bar{g}_2\) to get

$$\begin{aligned} \bar{p}_{(00)} = \mathbf {0.22301},\quad \bar{p}_{(01)} = \mathbf {0.33306},\quad \bar{p}_{(10)} = \mathbf {0.25884},\quad \bar{g}_1 = \mathbf {0.2561},\quad \bar{g}_2 = \mathbf {0.1073}. \end{aligned}$$

The two different sets of values \((\hat{\varvec{s}},\hat{\varvec{g}},\hat{\varvec{p}})\) and \((\bar{{\varvec{s}}},\bar{{\varvec{g}}},\bar{{\varvec{p}}})\) both give the identical log-likelihood value \(-\,1132.1264\), which confirms the non-identifiability.

To further illustrate the above argument does not depend on the sample size, we generate a random sample of size \(N=10^5\) and obtain the following estimators:

$$\begin{aligned} \begin{aligned}&(\hat{p}_{(00)},~\hat{p}_{(10)},~\hat{p}_{(01)},~\hat{p}_{(11)}) =(\mathbf {0.10436},~\mathbf {0.29933},~\mathbf {0.39845},~0.19786);\\&\hat{\varvec{s}}= (0.1968 ,~ 0.1932 ,~ 0.2007 ,~ 0.2065 ,~ 0.2015 ,~ 0.2000 ,~ 0.2001 ,~ 0.1949 ,~ 0.1985 ,~ 0.2036);\\&\hat{\varvec{g}}= (\mathbf {0.1993},~ \mathbf {0.2006},~ 0.1995 ,~ 0.2010 ,~ 0.1971 ,~ 0.1983 ,~ 0.1995 ,~ 0.2022 ,~ 0.1989 ,~ 0.1988). \end{aligned} \end{aligned}$$

Similarly, we set \(\bar{{\varvec{s}}}=\hat{\varvec{s}}\), \(\bar{g}_j = \hat{g}_j\) for \(j=3,\ldots ,10\), \(\bar{p}_{(11)}=\hat{p}_{(11)}\), and \(\bar{p}_{(00)} = 0.998\cdot \hat{p}_{(00)}\). Solving (8) gives

$$\begin{aligned} \bar{p}_{(00)} = \mathbf {0.10415},\quad \bar{p}_{(01)} = \mathbf {0.40161},\quad \bar{p}_{(10)} = \mathbf {0.29638},\quad \bar{g}_1 = \mathbf {0.3212},\quad \bar{g}_2 = \mathbf {0.0458}. \end{aligned}$$

Again, the two different sets of values \((\hat{\varvec{s}},\hat{\varvec{g}},\hat{\varvec{p}})\) and \((\bar{{\varvec{s}}},\bar{{\varvec{g}}},\bar{{\varvec{p}}})\) lead to the identical log-likelihood value \(-\,571659.1708\). This illustrates that the non-identifiability issue depends on the model setting rather than the sample size. In practice, as long as Conditions 1 and 2 do not hold, we may suffer from similar non-identifiability issues no matter how large the sample size is.

Identifiability is a prerequisite for consistent estimation. Here, we say a parameter is consistently estimable if we can construct a consistent estimator for it; that is, for parameter \(\beta \), there exists \(\hat{\beta }_N\) such that \(\hat{\beta }_N-\beta \rightarrow 0\) in probability as the sample size \(N\rightarrow \infty \). When the identifiability conditions are satisfied, we show that the maximum likelihood estimators (MLEs) of the DINA model parameters \(({\varvec{s}},{\varvec{g}},{\varvec{p}})\) are statistically consistent as \(N\rightarrow \infty \). For the observed responses \(\{{\varvec{R}}_i: i=1,\ldots , N\}\), we can write the likelihood function as

$$\begin{aligned} L_N({\varvec{s}},{\varvec{g}},{\varvec{p}};\,{\varvec{R}}_1,\ldots ,{\varvec{R}}_N) = \prod _{i=1}^N P({\varvec{R}}={\varvec{R}}_i\mid Q,{\varvec{s}},{\varvec{g}},{\varvec{p}}), \end{aligned}$$
(9)

where \(P({\varvec{R}}={\varvec{R}}_i\mid Q,{\varvec{s}},{\varvec{g}},{\varvec{p}})\) is as defined in (1). Let \((\hat{\varvec{s}}, \hat{\varvec{g}},\hat{\varvec{p}})\) be the corresponding MLEs based on (9). We have the following corollary.

Corollary 1

When Conditions 1 and 2 are satisfied, the MLEs \((\hat{\varvec{s}}, \hat{\varvec{g}},\hat{\varvec{p}})\) are consistent as \(N\rightarrow \infty \).
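For completeness, the likelihood in (9) can be evaluated by marginalizing over the \(2^K\) attribute profiles. A vectorized Python sketch of the log-likelihood (our own illustration, not the authors' code; profiles are assumed ordered lexicographically) is:

```python
import numpy as np
from itertools import product

def log_likelihood(R, Q, s, g, P):
    """Log of (9): sum_i log P(R = R_i | Q, s, g, p).

    R: (N, J) binary responses; Q: (J, K) binary Q-matrix;
    P: length-2^K proportions, ordered lexicographically over profiles.
    """
    K = Q.shape[1]
    profiles = np.array(list(product([0, 1], repeat=K)))         # (2^K, K)
    xi = np.all(profiles[:, None, :] >= Q[None, :, :], axis=2)   # (2^K, J)
    theta = np.where(xi, 1 - s, g)                               # P(R_j = 1 | alpha)
    # P(R_i | alpha) for every subject/profile pair, shape (N, 2^K)
    lik = np.prod(np.where(R[:, None, :] == 1, theta[None], 1 - theta[None]), axis=2)
    return np.sum(np.log(lik @ P))
```

Maximizing this function over \(({\varvec{s}},{\varvec{g}},{\varvec{p}})\), e.g., by an EM algorithm, gives the MLEs referenced in Corollary 1.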

The results in Theorem 1 and Corollary 1 can be directly applied to the DINO model through the duality of the DINA and DINO models (see Proposition 1 in Chen et al., 2015). Specifically, when Conditions 1 and 2 are satisfied, the guessing, slipping, and population proportion parameters in the DINO model are identifiable and can also be consistently estimated as \(N\rightarrow \infty \).

Moreover, the proof of Corollary 1 directly generalizes to other CDMs: if the model parameters, including the item parameters and population proportion parameters, are identifiable, then their MLEs are consistent as \(N\rightarrow \infty \). Therefore, under the sufficient conditions for identifiability of general CDMs developed in the literature, such as those in Xu (2017), the model parameters are also consistently estimable. Although the minimal requirement for identifiability and estimability of general CDMs is still unknown, the proposed Conditions 1 and 2 are necessary since the DINA model is a submodel of them. For instance, Xu (2017) requires two identity matrices in the Q-matrix for identifiability, which automatically satisfies Conditions 1 and 2 in this paper.

We next present an example to illustrate that when the proposed conditions are satisfied, the MLEs of the DINA model parameters are consistent.

Example 4

We perform a simulation study with the following Q-matrix that satisfies the proposed sufficient and necessary conditions. The true parameters are set to be \(p_{{\varvec{\alpha }}}=0.125\) for all \({\varvec{\alpha }}\in \{0,1\}^3\), and \(s_j=g_j=0.2\) for \(j=1,\ldots ,6\).

$$\begin{aligned} Q=\begin{pmatrix} 1 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 1 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 1 \\ 0 &{}\quad 1 &{}\quad 1 \\ 1 &{}\quad 0 &{}\quad 1 \\ 1 &{}\quad 1 &{}\quad 0 \\ \end{pmatrix}. \end{aligned}$$

For each sample size \(N=200\cdot i\) where \(i=1,\ldots ,10\), we generate 1000 independent datasets and use the EM algorithm with random initializations to obtain the MLEs of model parameters for each dataset. The mean-squared errors (MSEs) of the parameters \({\varvec{s}}\), \({\varvec{g}}\), \({\varvec{p}}\) computed from the 1000 runs are shown in Table 1 and Fig. 1. One can see that the MSEs keep decreasing as the sample size N increases, matching the theoretical result in Corollary 1.
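The EM algorithm used in this example is standard for latent class models. A minimal Python sketch of EM for the DINA model (our own illustration, not the authors' implementation; the random initialization and fixed iteration count are simplifying assumptions) is as follows.

```python
import numpy as np
from itertools import product

def em_dina(R, Q, n_iter=200, seed=0):
    """EM algorithm for the DINA model MLEs (s, g, p)."""
    rng = np.random.default_rng(seed)
    N, J = R.shape
    K = Q.shape[1]
    profiles = np.array(list(product([0, 1], repeat=K)))
    xi = np.all(profiles[:, None, :] >= Q[None, :, :], axis=2)   # (2^K, J) ideal responses
    s = rng.uniform(0.1, 0.3, J)
    g = rng.uniform(0.1, 0.3, J)
    p = np.full(2 ** K, 1 / 2 ** K)
    for _ in range(n_iter):
        theta = np.where(xi, 1 - s, g)                           # (2^K, J)
        lik = np.prod(np.where(R[:, None, :] == 1, theta, 1 - theta), axis=2)  # (N, 2^K)
        post = lik * p
        post /= post.sum(axis=1, keepdims=True)                  # E-step: P(alpha | R_i)
        p = post.mean(axis=0)                                    # M-step updates
        w1 = post @ xi.astype(float)                             # (N, J): P(xi_j = 1 | R_i)
        w0 = 1 - w1
        s = (w1 * (1 - R)).sum(0) / w1.sum(0)                    # expected slips / capable
        g = (w0 * R).sum(0) / np.maximum(w0.sum(0), 1e-12)       # expected guesses / incapable
    return s, g, p
```

Running this sketch on data simulated from the Q-matrix above recovers the true \(s_j=g_j=0.2\) and the uniform proportions with error shrinking in N, consistent with the MSE pattern reported in Table 1.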

Table 1: MSEs of DINA model parameters.

Fig. 1: MSE of DINA model parameters versus sample size N. (a) MSEs of p. (b) MSEs of s. (c) MSEs of g.

4 Discussion

This paper presents the sufficient and necessary condition for identifiability of the DINA and DINO model parameters and establishes the consistency of the maximum likelihood estimators. As discussed in Sect. 3, the results would also shed light on the study of the sufficient and necessary conditions for general CDMs.

This paper treats the attribute profiles as random effects from a population distribution. Under this setting, the identifiability conditions ensure consistent estimation of the model parameters. In general, however, identifiability conditions in statistics and psychometrics are not always sufficient for consistent estimation. An example of a model that is identifiable but not consistently estimable is the fixed-effects CDM, where the subjects’ attribute profiles are taken as model parameters. Consider a simple example of the DINA model with nonzero but known slipping and guessing parameters. Under the fixed-effects setting, the model parameters include \(\{{\varvec{\alpha }}_i, i=1,\ldots ,N\}\), which are identifiable if the Q-matrix is complete (e.g., Chiu et al., 2009). But with a fixed number of items, even when the sample size N goes to infinity, the parameters \(\{{\varvec{\alpha }}_i, i=1,\ldots ,N\}\) cannot be consistently estimated. In this case, consistent estimation of each \({\varvec{\alpha }}\) requires the number of items to go to infinity, along with the number of identity sub-Q-matrices (Wang and Douglas, 2015); equivalently, infinitely many sub-Q-matrices must satisfy Conditions 1 and 2.

When the identifiability conditions are not satisfied, we may expect to obtain partial identification results, in which certain parameters are identifiable while others are identifiable only up to some transformations. For instance, when Condition 1 is satisfied, the slipping parameters are all identifiable and the guessing parameters of items \((K+1,\ldots , J)\) are also identifiable. It is also possible in practice that there exist certain hierarchical structures among the latent attributes. For instance, an attribute may be a prerequisite for some other attributes. In this case, some entries of \({\varvec{p}}\) are restricted to be 0. It would also be interesting to consider the identifiability conditions under these restricted models. For these cases, weaker conditions are expected for identifiability of the model parameters. In particular, completeness of the Q-matrix may not be needed. We believe the techniques used in the proof of the main result can be extended to study such restricted models and would like to pursue this in the future.