October 1, 2006
Studying the Sensitivity of Inferences to Possible Unmeasured Confounding Variables in Multisite Evaluations
Authors:
Michael Seltzer, Jinok Kim and Ken Frank
In multisite evaluation studies, questions of primary interest often focus on whether particular facets of implementation or other aspects of classroom or school environments are critical to a program's success. However, differences in how teachers implement programs can stem from an array of factors, including differences in their training and experience, in the prior preparation of their students, and in the degree of support they receive from school administrators. A crucially important implication is that, in studying connections between various aspects of implementation and the effectiveness of programs, we need to be alert to factors that may be confounded with differences in implementation. Despite our best efforts to anticipate and measure possible confounding variables, teachers who differ in the quality and frequency with which they implement various program practices, use particular program materials, and the like may differ in important ways that have not been measured, giving rise to possible hidden bias. In this paper, we extend Frank's (2000) work on assessing the impact of omitted confounding variables on coefficients of interest in regression settings to applications of hierarchical models (HMs) in multisite settings in which interest centers on testing whether certain aspects of implementation are critical to a program's success. We provide a detailed illustrative example using data from a study of the effects of reform-minded instructional practices in mathematics (Gearhart et al., 1999; Saxe et al., 1999).
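The core of Frank's (2000) approach is to ask how strongly an omitted confounding variable would have to be correlated with both the predictor of interest (e.g., an implementation measure) and the outcome before the inference would change. A minimal sketch of the underlying partial-correlation arithmetic is given below; the function and variable names are illustrative and are not drawn from the report itself.

```python
import math

def impact(r_x_cv: float, r_y_cv: float) -> float:
    """Frank's (2000) impact of an omitted confounding variable (cv):
    the product of its correlations with the predictor x and the outcome y."""
    return r_x_cv * r_y_cv

def adjusted_corr(r_xy: float, r_x_cv: float, r_y_cv: float) -> float:
    """Correlation between x and y after partialing out the confounder cv
    (the standard first-order partial-correlation formula)."""
    return (r_xy - r_x_cv * r_y_cv) / math.sqrt(
        (1 - r_x_cv**2) * (1 - r_y_cv**2)
    )

# Hypothetical example: an observed implementation-outcome correlation
# of .50 is attenuated once a confounder correlated .40 with both the
# implementation measure and the outcome is taken into account.
r_adj = adjusted_corr(0.5, 0.4, 0.4)  # (0.50 - 0.16) / 0.84, roughly 0.40
```

The sensitivity question is then inverted: rather than assuming particular correlations for the confounder, one solves for the smallest impact (the product of the two correlations) that would drive the adjusted coefficient below the threshold for statistical or substantive significance.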
Seltzer, M., Kim, J., & Frank, K. (2006). Studying the sensitivity of inferences to possible unmeasured confounding variables in multisite evaluations (CSE Report 701). Los Angeles: University of California, Los Angeles, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).