Ben Hansen (PSC, U of M)
11/04/2013, at noon in room 6050 ISR-Thompson.
The No Child Left Behind Act of 2001 increased the quantity of education data that states collect and store, and subsequent state and federal initiatives have improved the quality of state K-12 databases. The resulting data systems remain imperfect and incomplete, to be sure, but they beg to be used -- for example, to assess educational programs and policies. Because the data are strictly regulated by the Family Educational Rights and Privacy Act (FERPA), however, assessments produced from them generally either settle for aggregated, unadjusted measures or are time-consuming and expensive to produce.
The talk describes statistical methods and procedures developed for the Evaluation Engine, a Gates-funded initiative that aims to remedy this situation by making fast, automated comparisons of program participants to matched comparison subjects within state databases available to state education agencies and school districts. Novelties of the method include the manner in which it uses propensity scores; specially constructed matching variables describing students' educational contexts; and its approach to analyzing the sensitivity of key conclusions to limitations of the data system as a basis for matching. The sensitivity analysis culminates in an easy-to-understand visual display, and promises to bring many more stakeholders than before into quantitatively specific deliberations about the potential impacts of unmeasured confounding.
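For readers unfamiliar with the technique, the general idea of propensity-score matching can be sketched as follows. This is a minimal, self-contained illustration only, not the Evaluation Engine's actual procedure: it assumes a hand-rolled logistic fit of treatment on covariates and a greedy 1:1 nearest-neighbor match on the estimated score.

```python
import math

def fit_propensity(X, t, steps=2000, lr=0.1):
    """Fit logit(P(treated)) = w0 + w.x by gradient ascent; returns weights."""
    n, p = len(X), len(X[0])
    w = [0.0] * (p + 1)  # w[0] is the intercept
    for _ in range(steps):
        grad = [0.0] * (p + 1)
        for xi, ti in zip(X, t):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = ti - 1.0 / (1.0 + math.exp(-z))  # residual of logistic fit
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / n for wj, g in zip(w, grad)]
    return w

def propensity(w, x):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

def match(X, t):
    """Greedily pair each treated unit with the closest unused control
    on the estimated propensity score; returns (treated, control) pairs."""
    w = fit_propensity(X, t)
    scores = [propensity(w, x) for x in X]
    treated = [i for i, ti in enumerate(t) if ti == 1]
    controls = [i for i, ti in enumerate(t) if ti == 0]
    pairs = []
    for i in treated:
        j = min(controls, key=lambda c: abs(scores[c] - scores[i]))
        pairs.append((i, j))
        controls.remove(j)  # 1:1 matching without replacement
    return pairs
```

In a toy dataset where the single covariate is, say, a prior test score, `match` pairs each program participant with the comparison subject whose estimated probability of participation is closest, so subsequent outcome comparisons are made within similar-looking pairs.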