January 2, 1999
Assessing Students With Disabilities in Kentucky: The Effects of Accommodations, Format, and Subject
Authors:
Daniel Koretz and Laura Hamilton
In an earlier study (Koretz, 1997), we reported that Kentucky had been unusually successful in testing most students with disabilities, but we found numerous signs of poor measurement, including differential item functioning (DIF) in mathematics, apparently excessive use of accommodations, and implausibly high mean scores for some groups of students with disabilities. The present study used newer data to test the stability of those findings over a two-year period, to extend some of the analyses to additional subject areas, and to compare performance on open-response items with performance on multiple-choice items, which were not administered in the assessment investigated earlier. We analyzed test score data from students in Grades 4, 5, 7, 8, and 11. The inclusiveness of the assessment persisted, and the frequency of specific accommodations remained unchanged. The mean performance of elementary school students with disabilities dropped substantially, however, apparently because accommodations, particularly dictation, had a smaller impact on scores. These lower scores, while discouraging as an indication of student performance, appear more plausible. Score differences between students with and without disabilities tended to be larger on the multiple-choice components in the elementary grades, whereas in the higher grades the differences were generally similar across formats or larger on the open-response components. Across grades, the effects of accommodations were stronger on the open-response tests than on the multiple-choice tests. Correlations among parts of the assessment differed for accommodated students with disabilities relative to other students, with higher correlations across subjects on the open-response components, which may indicate that some accommodations change the dimensionality of the assessment. DIF was apparent in both the open-response and multiple-choice components of the assessment, but it was mostly limited to students who received accommodations. Further research and more detailed data on the specific uses of accommodations are needed to clarify the reasons for these findings and to guide the development of more effective approaches to assessing students with disabilities.
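The abstract refers repeatedly to DIF but, as an abstract, does not describe a detection procedure. Purely as an illustrative sketch (not the authors' method), the Mantel-Haenszel statistic is one common way to screen a dichotomous item for DIF between two groups, such as accommodated and non-accommodated examinees, matched on total score; the function name and data layout below are hypothetical.

```python
import numpy as np

def mantel_haenszel_dif(scores, responses, group):
    """Illustrative Mantel-Haenszel DIF screen for one dichotomous item.

    scores    : total test scores used as the matching variable
    responses : 1 = correct, 0 = incorrect on the studied item
    group     : 0 = reference group, 1 = focal group (e.g., accommodated)
    Returns (alpha_mh, delta_mh); |delta_mh| > 1.5 is a common flag for DIF.
    """
    scores = np.asarray(scores)
    responses = np.asarray(responses)
    group = np.asarray(group)

    num = 0.0  # sum over score strata of A_k * D_k / N_k
    den = 0.0  # sum over score strata of B_k * C_k / N_k
    for s in np.unique(scores):
        m = scores == s
        ref, foc = m & (group == 0), m & (group == 1)
        a = np.sum(responses[ref] == 1)  # reference group, correct
        b = np.sum(responses[ref] == 0)  # reference group, incorrect
        c = np.sum(responses[foc] == 1)  # focal group, correct
        d = np.sum(responses[foc] == 0)  # focal group, incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    if den == 0:
        return float("nan"), float("nan")
    alpha = num / den                 # common odds ratio across strata
    delta = -2.35 * np.log(alpha)     # ETS delta scale
    return alpha, delta
```

An alpha near 1 (delta near 0) suggests the item functions similarly for both groups once total score is controlled; large positive or negative delta values would flag the item for closer review.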
Koretz, D., & Hamilton, L. (1999). Assessing students with disabilities in Kentucky: The effects of accommodations, format, and subject (CSE Report 498). Los Angeles: University of California, Los Angeles, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).