December 1, 2013

Automatically Scoring Short Essays for Content

Authors:
Deirdre Kerr, Hamid Mousavi, and Markus R. Iseli
The Common Core assessments emphasize short-essay constructed-response items over multiple-choice items because they are more precise measures of understanding. However, such items are too costly and time-consuming to be used in national assessments unless a way is found to score them automatically. Current automatic essay scoring techniques are inappropriate for scoring the content of an essay because they rely on either grammatical measures of quality or machine learning techniques, neither of which identifies statements of meaning (propositions) in the text. In this report, we explain our process of (1) extracting meaning from student essays in the form of propositions using our text mining framework called SemScape, (2) using the propositions to score the essays, and (3) testing our system’s performance on two separate sets of essays. Results demonstrate the potential of this purely semantic process and indicate that the system can accurately extract propositions from student short essays, approaching or exceeding standard benchmarks for scoring performance.
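To make the proposition-based scoring step concrete, the sketch below shows one simple way such scoring could work: propositions are treated as (subject, relation, object) triples, and an essay's content score is the fraction of rubric propositions it covers after light synonym normalization. This is an illustrative assumption only, not the SemScape extraction pipeline or the actual scoring method described in the report; the Proposition class, SYNONYMS table, and example triples are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Proposition:
    subject: str
    relation: str
    obj: str

# Hypothetical synonym table so surface variants (e.g., "rotates"/"spins") match.
SYNONYMS = {
    "the earth": "earth",
    "rotates": "spins",
}

def normalize(term: str) -> str:
    term = term.lower().strip()
    return SYNONYMS.get(term, term)

def matches(candidate: Proposition, target: Proposition) -> bool:
    """Two propositions match if all three normalized slots agree."""
    return (normalize(candidate.subject) == normalize(target.subject)
            and normalize(candidate.relation) == normalize(target.relation)
            and normalize(candidate.obj) == normalize(target.obj))

def score_essay(essay_props, rubric_props) -> float:
    """Return the fraction of rubric propositions present in the essay."""
    covered = sum(
        any(matches(p, r) for p in essay_props) for r in rubric_props
    )
    return covered / len(rubric_props) if rubric_props else 0.0

if __name__ == "__main__":
    rubric = [Proposition("earth", "spins", "on its axis"),
              Proposition("spin", "causes", "day and night")]
    essay = [Proposition("the earth", "rotates", "on its axis")]
    print(score_essay(essay, rubric))  # 0.5 -> half of the rubric content is present

In practice the hard part is the extraction of reliable triples from free-form student text, which is what the SemScape framework addresses; once propositions are available, matching them against a key can remain comparatively simple.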
Kerr, D., Mousavi, H., & Iseli, M. R. (2013). Automatically scoring short essays for content (CRESST Report 836). Los Angeles: University of California, Los Angeles, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).