Establishing test form and individual task comparability: a case study of a semi-direct speaking test
Abstract
Examination boards are often criticized for their failure to provide evidence of comparability across forms, and few such studies are publicly available. This study aims to investigate the extent to which three forms of the General English Proficiency Test Intermediate Speaking Test (GEPTS-I) are parallel in terms of two types of validity evidence: parallel-forms reliability and content validity. The three trial test forms, each containing three different task types (read-aloud, answering questions and picture description), were administered to 120 intermediate-level EFL learners in Taiwan. The performance data from the different test forms were analysed using classical procedures and Multi-Faceted Rasch Measurement (MFRM). Various checklists were also employed to compare the tasks in different forms qualitatively in terms of content. The results showed that all three test forms were statistically parallel overall and Forms 2 and 3 could also be considered parallel at the individual task level. Moreover, sources of variation to account for the variable difficulty of tasks in Form 1 were identified by the checklists. Results of the study provide insights for further improvement in parallel-forms reliability of the GEPTS-I at the task level and offer a set of methodological procedures for other exam boards to consider. © 2006 Edward Arnold (Publishers) Ltd.
Citation
Weir CJ, Wu JRW (2006) 'Establishing test form and individual task comparability: a case study of a semi-direct speaking test', Language Testing, 23 (2), pp. 167-197.
Publisher
SAGE
Journal
Language Testing
Additional Links
https://journals.sagepub.com/doi/10.1191/0265532206lt326oa
Type
Article
Language
en
ISSN
0265-5322
DOI
10.1191/0265532206lt326oa