Multiple-response items, sequencing items, and matching items are three innovative item types, often included in systems for computer-based assessment, that offer the benefit of polytomous scoring and the possibility of measuring partial knowledge. In the present study, different scoring methods for these three item types were compared. Based on the assumption that different response patterns to these item types represent different levels of knowledge, these knowledge levels were described. Features of various scoring methods were examined to select the methods included in the comparison. Subsequently, a probability distribution of scoring results was derived and computed for each knowledge level. Based on classical test theory, a measure of the reliability of the different scoring methods at the level of a single item was derived. To compare the selected scoring methods, reliabilities were computed for several distributions of knowledge levels in a population. For a multiple-response item in which an examinee must select all the right options, the dichotomous scoring method yielded higher reliabilities than polytomous scoring of the response patterns. For matching items, and for multiple-response items in which an examinee is asked to select fewer options than the total number of right options given, polytomous scoring methods yielded higher reliabilities than the dichotomous scoring method. Simple polytomous scoring, by counting the selected right options or relations, is recommended over more complex polytomous scoring methods, such as those using a correction for wrong answers or a so-called "floor". The results for scoring sequencing items were not as conclusive as those for the other two item types.
Keywords: innovative item types, reliability, multiple-response items, matching items, sequencing items
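To make the scoring methods named in the abstract concrete, the following is a minimal illustrative sketch (not taken from the paper itself) of how a multiple-response item might be scored dichotomously, by simple polytomous counting of right options, and with a correction for wrong answers combined with a floor. Function names, the floor value of 0, and the option labels are assumptions made for illustration only.

```python
def score_dichotomous(selected, key):
    """Dichotomous scoring: 1 only if exactly the right options are selected."""
    return 1 if set(selected) == set(key) else 0

def score_polytomous(selected, key):
    """Simple polytomous scoring: count the selected right options."""
    return len(set(selected) & set(key))

def score_corrected(selected, key, floor=0):
    """Polytomous scoring with a correction for wrong answers and a floor:
    right-minus-wrong, but never below the floor (assumed to be 0 here)."""
    right = len(set(selected) & set(key))
    wrong = len(set(selected) - set(key))
    return max(right - wrong, floor)

# Example: key is {A, B, C}; an examinee selects A, B, and D.
key = {"A", "B", "C"}
selected = {"A", "B", "D"}
print(score_dichotomous(selected, key))  # 0: not an exact match
print(score_polytomous(selected, key))   # 2: two right options selected
print(score_corrected(selected, key))    # 1: two right minus one wrong
```

The example shows how partial knowledge (two of three right options) is invisible to dichotomous scoring but is credited, to different degrees, by the two polytomous methods.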