Cognitive assessment is a growing area in psychological and educational measurement, in which tests are administered to assess subjects' mastery or deficiency of attributes or skills. The discrete nature of the data makes the analysis difficult, because calculus-based tools are largely inapplicable; in fact, most analyses are based on combinatorics. Third, the model makes explicit distributional assumptions on the (unobserved) attributes, which dictate the law of the observed responses; the dependence of the responses on the items is via the attributes.

Let A = (A^1, …, A^K) ∈ {0,1}^K be the indicator vector of the presence or absence of the K attributes, and let R = (R^1, …, R^J) ∈ {0,1}^J be the vector of responses to the J items; J and K are known. The Q-matrix provides the link between item attributes and responses. We define a J × K matrix Q = (q_jk), where q_jk = 1 when item j requires attribute k and 0 otherwise. Furthermore, for i = 1, …, N, R_i is the response vector of subject i and A_i is the attribute vector of subject i, with N denoting the sample size. The attributes A_i are not observed. Our objective is to make inference on Q.

The T-matrix T(Q) has 2^K − 1 columns, each of which corresponds to one non-zero attribute vector A ∈ {0,1}^K \ {(0, …, 0)}. Instead of labeling the columns by integers, we index the rows by non-empty sets of items, a generic row standing for positive responses to the items in that set, so that T(Q) has up to 2^J − 1 rows. We will later say when such a matrix is saturated (Definition 2.1 in Section 2.4). We now describe each row vector of T(Q); each is a (2^K − 1)-dimensional row vector. Using the same labeling system as that of the columns, the row corresponding to a set of items indicates which attribute profiles are capable of responding positively to all items in that set. Let N_{j_1…j_l} denote the number of people who have positive responses to items j_1, …, j_l, and let p = (p_α) be a column vector, the length of which equals the number of columns of T(Q), containing the proportions of subjects with each attribute profile. For each set of items, the matrix multiplication T(Q)p sums up the proportions corresponding to each attribute profile capable of responding positively to that set of items, giving us the total proportion of subjects who respond positively to the corresponding items.

The estimator of the Q-matrix: for each J × K binary matrix Q′, an objective function S(Q′) compares T(Q′)p with the observed response proportions, and the estimator can be obtained by minimizing S(Q′) over all J × K binary matrices.
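The construction above can be sketched in code. The function names here are hypothetical, ideal responses without slipping or guessing are assumed when deciding which attribute profiles can answer an item set, and the minimization over p is relaxed to unconstrained least squares purely to keep the sketch short (the actual estimator restricts p to the probability simplex):

```python
import itertools
import numpy as np

def t_matrix(Q):
    """Build a T-matrix for a J x K binary Q-matrix.

    Columns are indexed by the 2^K - 1 non-zero attribute profiles and
    rows by the 2^J - 1 non-empty item subsets.  The entry for (subset S,
    profile a) is 1 when a contains every attribute required by every
    item in S, i.e. a is capable of responding positively to all of S
    (ideal-response assumption for this sketch).
    """
    J, K = Q.shape
    profiles = [a for a in itertools.product([0, 1], repeat=K) if any(a)]
    subsets = [s for s in itertools.product([0, 1], repeat=J) if any(s)]
    T = np.zeros((len(subsets), len(profiles)))
    for r, s in enumerate(subsets):
        for c, a in enumerate(profiles):
            T[r, c] = float(all(a[k] >= Q[j, k]
                                for j in range(J) if s[j]
                                for k in range(K)))
    return T

def objective(Q, beta_hat):
    """S(Q) = min_p |T(Q) p - beta_hat|, minimized here by unconstrained
    least squares (a relaxation of the simplex constraint on p)."""
    T = t_matrix(Q)
    p, *_ = np.linalg.lstsq(T, beta_hat, rcond=None)
    return np.linalg.norm(T @ p - beta_hat)
```

At the population proportions generated by the true Q-matrix the objective is zero, while a misspecified matrix generally cannot reproduce them, which is what makes minimization over binary matrices informative.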
Note that the minimizers are not unique. We shall later prove that the minimizers all belong to the same meaningful equivalence class. Because of (2.5), the true p is not observed. Just for illustration, we construct a simple non-saturated T-matrix such that p solves a linear equation with entries in [0, 1], where Q is the true matrix.

2.4 Basic results

Before stating the main result, we provide a list of notations which will be used in the discussions: the linear space spanned by a collection of vectors; the submatrix containing the first rows and all columns of a matrix; e_k, a column vector whose k-th entry is one and whose remaining entries are zero; and I_K, the K × K identity matrix. For a matrix Q, C(Q) denotes the set of column vectors of Q and R(Q) denotes the set of row vectors of Q. Let a and b be two vectors of the same dimension. We write a ≻ b if a_i > b_i for all 1 ≤ i ≤ K, and a ≠ b if a_i ≠ b_i for all i = 1, …, K. The true Q-matrix is a J × K binary matrix. The following definitions shall be used in subsequent discussions.

Definition 2.1. We say that T(Q) is saturated if its rows include those indexed by all combinations of items j_1, …, j_l, for l = 1, …, J.

Definition 2.2. We write Q ~ Q′ if Q and Q′ have the same columns up to a permutation.

Remark 2.1. It is not hard to show that "~" is an equivalence relation. Q ~ Q′ is interpreted as follows: each column of Q corresponds to an attribute, and permuting the columns of Q is equivalent to relabeling the attributes.

Definition 2.3. A Q-matrix is complete if {e_k : k = 1, …, K} ⊂ R(Q), where R(Q) is the set of row vectors of Q; that is, for each attribute there is an item requiring that attribute and no others.

Remark 2.2. One of the main objectives of cognitive assessment is to identify the subjects' attributes; see [22] for other applications. It has been established in [1] that the completeness of the Q-matrix is necessary for this purpose. We therefore impose the following conditions. C1. The true Q-matrix is complete. C2. The A_i are i.i.d. random vectors following distribution (2.7), with the probabilities in (2.6) strictly positive.

Theorem 2.3. If p belongs to … .

Remark 2.4. In order to obtain the consistency of the estimator (subject to a column permutation), it is necessary to have p* not living on some sub-manifold. To see a counterexample, suppose that A_i = (1, …, 1) for all i, that is, all subjects are able to solve all problems. Therefore the distribution of R is independent of Q, and one cannot distinguish whether an item requires one attribute alone, another attribute alone, or both; see [16, 17] for similar cases for the multidimensional IRT models.

2.5. Note that the estimator of the attribute distribution can be chosen such that p̂_Q = p̂_{Q′} whenever Q ~ Q′. Remark 2.6.
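Definitions 2.2 and 2.3 are straightforward to operationalize. A minimal sketch, with hypothetical function names, assuming Q-matrices are represented as NumPy integer arrays:

```python
import itertools
import numpy as np

def is_complete(Q):
    """Completeness (Definition 2.3): for every attribute k, some row
    of Q equals the unit vector e_k, i.e. some item requires attribute
    k and nothing else."""
    K = Q.shape[1]
    rows = {tuple(r) for r in Q}
    return all(tuple(np.eye(K, dtype=int)[k]) in rows for k in range(K))

def equivalent(Q1, Q2):
    """Equivalence (Definition 2.2): Q1 ~ Q2 when Q2 equals Q1 with its
    columns permuted, i.e. the matrices differ only by a relabeling of
    the attributes."""
    if Q1.shape != Q2.shape:
        return False
    K = Q1.shape[1]
    return any(np.array_equal(Q1[:, list(perm)], Q2)
               for perm in itertools.permutations(range(K)))
```

Checking equivalence by enumerating all K! column permutations is affordable because K, the number of attributes, is typically small in cognitive assessment applications.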
One practical issue associated with the proposed procedure is the computation. For a large number of items J, the computational overhead of searching for the minimizer among all J × K binary matrices could be substantial. One practical solution is to break the items into groups (possibly with non-empty overlap across different groups) and then apply the proposed estimator to each group of items separately. This is equivalent to breaking a big J × K Q-matrix into several smaller matrices and estimating each of them separately. Lastly.
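The saving from grouping is easy to quantify: an exhaustive search visits 2^(J·K) candidate binary matrices, while estimating groups of sizes J_1, …, J_G separately visits only the sum of 2^(J_g·K) over the groups. A back-of-the-envelope sketch, with hypothetical group sizes (overlap across groups allowed):

```python
def n_candidates(n_items, K):
    """Number of binary n_items x K matrices an exhaustive search visits."""
    return 2 ** (n_items * K)

J, K = 20, 3                         # example problem size (hypothetical)
full_search = n_candidates(J, K)     # 2^60 candidate Q-matrices
group_sizes = [7, 7, 8]              # hypothetical overlapping item groups
split_search = sum(n_candidates(g, K) for g in group_sizes)
# split_search = 2^21 + 2^21 + 2^24, many orders of magnitude below 2^60
```

The exponential dependence on the group size, rather than on J itself, is what makes the divide-and-estimate strategy practical.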