Abstract
Existing research does not quantify and compare the differences between automated and manual assessment in the context of feedback on programming assignments. This makes it hard to reason about the effects of adopting automated assessment at the expense of manual assessment. Based on a controlled experiment involving N=117 undergraduate first-semester CS1 students, we compare the effects of having access to feedback from: i) only automated assessment, ii) only manual assessment (in the form of teaching assistants), and iii) both automated as well as manual assessment. The three conditions are compared in terms of (objective) task effectiveness and from a (subjective) student perspective.
The experiment demonstrates that having access to both forms of assessment (automated and manual) is superior from both a task-effectiveness and a student perspective. We also find that the two forms of assessment are complementary: automated assessment appears to be better in terms of task effectiveness, whereas manual assessment appears to be better from a student perspective. Further, we find that automated assessment appears to work better for men than for women, who are significantly more inclined towards manual assessment. We then perform a cost/benefit analysis that leads to the identification of four equilibria appropriately balancing costs and benefits. Finally, this gives rise to four recommendations on when to use which kind or combination of feedback (manual and/or automated), depending on the number of students and the amount of per-student resources available. These observations provide educators with evidence-based justification for budget requests and with considerations on when (not) to use automated assessment.
| Original language | English |
|---|---|
| Title | Proceedings of the 23rd Koli Calling International Conference on Computing Education Research, Koli Calling 2023, Koli, Finland, November 13-18, 2023 |
| Editors | Andreas Mühling, Ilkka Jormanainen |
| Number of pages | 10 |
| Place of publication | New York, NY, USA |
| Publisher | Association for Computing Machinery |
| Publication date | 2023 |
| Pages | 2:1-2:10 |
| Article number | 2 |
| ISBN (Electronic) | 9798400716539 |
| DOI | |
| Status | Published - 2023 |
| Event | Koli Calling International Conference on Computing Education Research - Koli, Finland. Duration: 13 Nov 2023 → 18 Nov 2023. Conference number: 23 |
Conference
| Conference | Koli Calling International Conference on Computing Education Research |
|---|---|
| Number | 23 |
| Country/Territory | Finland |
| City | Koli |
| Period | 13/11/2023 → 18/11/2023 |