May 2, 2008

Templates and Objects in Authoring Problem-Solving Assessments

Authors:
Terry P. Vendlinski, Eva L. Baker, and David Niemi
Assessing whether students can both represent a corpus of learned knowledge and apply that knowledge to solve problems is key to assessing student understanding. This, in turn, shapes our thinking about what we assess, how we author such assessments, and how we interpret assessment results. The diffusion of technology into learning environments offers new opportunities in the area of student assessment. Specifically, computer-based simulations seem to provide sufficiently rich environments and the tools necessary to allow us to infer accurately how well a student’s individual mental model of the world can accommodate, integrate, and exploit concepts from a domain of interest. In this paper, we first identify the characteristics of simulations that our experience suggests are necessary to make them appropriate for pedagogical and assessment purposes. Next, we discuss the models and frameworks (templates) we have used to ensure these characteristics are considered. Finally, we describe two computerized instantiations (objects) of these frameworks and the implications for the follow-on design of simulations.
Vendlinski, T. P., Baker, E., L., & Niemi, D. (2008). Templates and objects in authoring problem-solving assessments (CRESST Report 735). Los Angeles: University of California, Los Angeles, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).