Automated Scoring of Multicomponent Tasks
Abstract
Assessment of real-world skills increasingly requires efficient scoring of non-routine test items. This chapter addresses the scoring and psychometric treatment of a broad class of automatically scorable complex assessment tasks that admit a definite set of responses orderable by quality. These multicomponent tasks are described, and proposals are advanced for scoring them so that they capture gradations of performance quality. The resulting response evaluation functions are assessed empirically against alternatives using data from a pilot of technology-enhanced items (TEIs) administered to a sample of high school students in one U.S. state. The results support scoring frameworks that leverage the full potential of multicomponent tasks to provide evidence of partial knowledge, understanding, or skill.
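The contrast between partial-credit and all-or-nothing scoring of a multicomponent task can be sketched as follows. This is a minimal illustration, not the chapter's actual response evaluation functions: the function names, the 0/1 component scoring, and the simple count-based partial-credit rule are all assumptions made for the example.

```python
def score_multicomponent(response, key):
    """Partial-credit sketch: count of components answered correctly.

    Yields an ordered polytomous score in 0..len(key), so responses
    are orderable by quality rather than scored right/wrong.
    """
    if len(response) != len(key):
        raise ValueError("response and key must have the same length")
    return sum(r == k for r, k in zip(response, key))


def score_dichotomous(response, key):
    """All-or-nothing alternative: 1 only if every component is correct."""
    return int(score_multicomponent(response, key) == len(key))


# A hypothetical 4-component TEI: partial credit distinguishes a student
# who answered 3 of 4 components correctly from one who answered none.
key = ["B", "A", "D", "C"]
print(score_multicomponent(["B", "A", "D", "A"], key))  # 3
print(score_dichotomous(["B", "A", "D", "A"], key))     # 0
```

Under the dichotomous rule both students above would receive 0, discarding the evidence of partial knowledge that the partial-credit score retains.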