Assessment Using Complex Tasks: Magnificently Messy Modeling
There has been increasing interest in using complex tasks, such as simulations and games, for assessment of higher-level cognitive skills. Advocates argue that constructs such as problem solving, scientific inquiry and collaboration cannot be adequately measured with traditional discrete multiple-choice items. Complex tasks, however, come at a cost. They take longer to perform, are context dependent, and involve the interaction of many student factors, some of which are not construct relevant. To enable valid inferences about student abilities based on complex tasks, we will need models that can handle the extra cognitive complexity that such tasks imply. In this talk I will discuss both the utility and the difficulties of using complex tasks for assessment. I will then present two models that may be appropriate for use with complex tasks. The first is an IRT mixture model that separates a student's task conception from procedural ability and estimates both. The second, based on a Markov decision process cognitive model, enables the modeling of actions students take as they work toward a goal by representing the latent structures of student goals, beliefs and abilities. Both simulation and application studies will be presented to illustrate the potential utility of these models.
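To make the Markov-decision-process idea concrete, here is a minimal sketch of one common formalization of such action models: solve a small MDP for the optimal action values, then treat student choices as a softmax (Boltzmann) policy whose inverse-temperature parameter plays the role of a latent ability. The toy task, state space, rewards and the `beta` ability parameter below are all illustrative assumptions, not the speaker's actual model.

```python
import numpy as np

# Hypothetical toy task: states 0..3, goal at state 3 (assumed for illustration).
# Actions: 0 = "forward" (advance toward the goal), 1 = "stay".
n_states, n_actions, gamma = 4, 2, 0.9

# T[s, a] -> next state; "forward" advances, "stay" does not.
T = np.array([[1, 0], [2, 1], [3, 2], [3, 3]])
# Reward of +1 for the transition that reaches the goal, 0 otherwise.
R = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 0.0], [0.0, 0.0]])

# Value iteration: repeatedly back up action values Q(s, a).
Q = np.zeros((n_states, n_actions))
for _ in range(100):
    V = Q.max(axis=1)        # state values under the greedy policy
    Q = R + gamma * V[T]     # one-step lookahead through the transitions

def action_probs(s, beta):
    """Softmax policy over Q(s, .): beta acts as a latent ability or
    rationality parameter; higher beta means closer-to-optimal choices."""
    z = np.exp(beta * (Q[s] - Q[s].max()))  # subtract max for stability
    return z / z.sum()
```

Under this sketch, inference runs in the other direction: given a student's observed action sequence, the likelihood of each action is given by `action_probs`, and `beta` (along with any goal or belief parameters) is estimated from the data. For example, a high-`beta` student picks "forward" at state 0 more reliably than a low-`beta` student.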
Michelle LaMar is an associate research scientist in the Research and Development division of Educational Testing Service in its San Francisco office. Her current research focuses on the development of psychometric models appropriate for use with complex assessment tasks such as simulations or games. She is particularly interested in modeling task-process data using dynamic cognitive models to enable valid inference about multiple layers of student cognition. Previously, Michelle worked at WestEd, providing statistical analysis, psychometric modeling and consulting for their innovative, computer-based science assessments. She also worked at UC Berkeley's BEAR Center on the assessment of 21st century skills. Prior to her doctoral work, Michelle was a senior software engineer with the California State University's Center for Distributed Learning, where she led the development of instructional software and academic tools. Michelle received her Ph.D. in educational measurement from the University of California, Berkeley in 2014, her M.A. in education from Sonoma State University in 2009 and her B.A. in philosophy and physics from the University of Chicago in 1988.