A Case Study of Peer Assessment in a Composition MOOC: Students' Perceptions and Peer-grading Scores versus Instructor-grading Scores
Abstract
Enrollments of many thousands of students in MOOCs seem to exceed the assessment capacity of instructors; the inability of instructors to grade so many papers has likely driven MOOCs toward peer assessment. However, there has been little empirical research on peer assessment in MOOCs, especially composition MOOCs. This study addressed peer assessment in a composition MOOC, focusing on students' perceptions and on peer-grading scores versus instructor-grading scores. The findings provide evidence that peer assessment was well received by the majority of students, although many also expressed negative feelings about the activity. Statistical analysis shows significant differences between the grades given by students and those given by the instructors: the grades students awarded to their peers tended to be higher than the instructor-assigned grades. Based on these results, the study concludes with implications for peer assessment in the composition MOOC context.