Developing an Essential Feature of Test Validity Arguments: Alignment Among the Test Design, Interpretation of Test Outcomes, and Evidence for Validity

The AERA Division D Robert L. Linn Distinguished Address Award,
given at the AERA Annual Meeting, April 2018

Currently, the standard conceptualization of test validity (Kane, 2006) sees it as being composed of two parts: (a) an interpretive argument, which specifies the proposed interpretations and uses of scores by laying out a chain or network of inferences and assumptions leading from the observed performances to the conclusions and decisions based on the scores; and (b) a validity argument, which evaluates the interpretive argument's coherence and the plausibility of its inferences and assumptions. In this talk, I extend this conceptualization by emphasizing the importance of establishing the claims of these arguments: the conceptual, theoretical, and empirical work that is needed before any interpretations can be made, and that forms the background to the validity evidence. I will use the context of an assessment based on a novel mathematics curriculum (Assessing Data Modeling) developed using the BEAR Assessment System.

Mark Wilson's interests focus on measurement and applied statistics. His work spans a range of issues in measurement and assessment from the development of new statistical models for analyzing measurement data, to the development of new assessments in subject matter areas such as science education, patient-reported outcomes and child development, to policy issues in the use of assessment data in accountability systems.

Tuesday, August 28, 2018 - 2:00pm
2121 Berkeley Way
PDF: Slide1 (21.17 MB)
PDF: Slide2 (21.91 MB)