October 1, 2003
Teachers’ Responses to High-Stakes Testing and the Validity of Gains: A Pilot Study
Authors:
Daniel M. Koretz and Laura S. Hamilton
Previous studies of the validity of gains on high-stakes tests have compared trends in scores on a high-stakes test to trends on a lower-stakes test, such as NAEP. However, generalizability of gains is likely to be incomplete even when gains are meaningful, because of differences in the inferences the two tests are designed to support. Therefore, this simple approach is useful only when the disparity in trends on the two tests is very large. A more sensitive but more difficult approach requires identifying the specific aspects of performance that increase by varying amounts and comparing them to the specific inferences users base on the score increases. A key to this approach may be identifying the aspects of performance that teachers focus on in their attempts to raise scores. This report presents the results of a pilot study evaluating several types of survey questions designed to elicit from teachers detailed information on their instructional responses to testing. The types of responses explored are those that previous CRESST work (Koretz, McCaffrey, & Hamilton, 2001) suggested are important for validating score gains. Of the formats used, the most promising appears to be questions that use actual test items as prompts, drawing items both from the high-stakes test for which teachers are preparing their students and from other tests.
Koretz, D. M., & Hamilton, L. S. (2003). Teachers’ responses to high-stakes testing and the validity of gains: A pilot study (CSE Report 610). Los Angeles: University of California, Los Angeles, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).