August 2, 1995
Monitoring and Improving a Portfolio Assessment System
Authors:
Carol M. Myford and Robert J. Mislevy
One of the promises of performance assessment is to support instruction and learning while accurately measuring student skills tied to high standards. Establishing and refining such a framework is especially difficult in large-scale settings and for high-stakes purposes. This study argues that both qualitative and quantitative perspectives are needed to accomplish this task, and illustrates their interplay in analyses of the College Entrance Examination Board's Advanced Placement Studio Art Portfolio Assessment. Naturalistic techniques are used to investigate the kinds of evidence, inference, arguments, and standards that underlie ratings of students' submissions. Quantitative methods are used to determine the accuracy of the assessment, differences in rater harshness, the effect of readers' harshness on student scores, consistency in the application of rating criteria, aspects of raters' backgrounds and training that might influence ratings, and the reasonableness of calibrating ratings from different sections of the exam into a single score. (Note: This study was also published by the Center for Performance Assessment at the Educational Testing Service.)
Myford, C. M., & Mislevy, R. J. (1995). Monitoring and improving a portfolio assessment system (CSE Report 402). Los Angeles: University of California, Los Angeles, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).