Using Generalizability Theory in Estimating Reliability of a Mathematical Competence Assessment Test of Fourth Year Primary School Students
Keywords:
Generalizability Theory, Reliability, Competence Assessment Test, Sources of Error Variance, Complex Tasks

Abstract
The current study used Generalizability Theory to estimate the reliability of a mathematical competence assessment test. The test comprised nine complex tasks in three formats: a) three well-defined tasks, b) three ill-defined tasks, and c) three tasks with parasite data. These tasks were administered to a sample of 331 fourth-year primary school students. Three trained raters scored the responses using analytic scoring rubrics. The data were analyzed under a fully crossed two-facet design (person × task × rater) using the EduG package. Results revealed substantial sources of error variance attributable to the person-task interaction effect and the task main effect. To reach acceptable levels of reliability, the number of tasks, rather than the number of raters, should be increased. Accordingly, special caution is warranted when using complex tasks in competence assessment measures.