Linking tests of English for academic purposes to the CEFR: the score user’s perspective
dc.contributor.author | Green, Anthony | en |
dc.date.accessioned | 2017-11-23T11:03:54Z | |
dc.date.available | 2017-11-23T11:03:54Z | |
dc.date.issued | 2017-11-13 | |
dc.identifier.citation | Green A (2017) 'Linking tests of English for academic purposes to the CEFR: the score user’s perspective', Language Assessment Quarterly 15 (1) 59-74 | en |
dc.identifier.issn | 1543-4303 | |
dc.identifier.doi | 10.1080/15434303.2017.1350685 | |
dc.identifier.uri | http://hdl.handle.net/10547/622401 | |
dc.description.abstract | The Common European Framework of Reference for Languages (CEFR) is widely used in setting language proficiency requirements, including for international students seeking access to university courses taught in English. When different language examinations have been related to the CEFR, the process is claimed to help score users, such as university admissions staff, to compare and evaluate these examinations as tools for selecting qualified applicants. This study analyses the linking claims made for four internationally recognised tests of English widely used in university admissions. It uses the Council of Europe’s (2009) suggested stages of specification, standard setting, and empirical validation to frame an evaluation of the extent to which, in this context, the CEFR has fulfilled its potential to “facilitate comparisons between different systems of qualifications.” Findings show that testing agencies make little use of CEFR categories to explain test content; represent the relationships between their tests and the framework in different terms; and arrive at conflicting conclusions about the correspondences between test scores and CEFR levels. This raises questions about the capacity of the CEFR to communicate competing views of a test construct within a coherent overarching structure. | |
dc.language.iso | en | en |
dc.publisher | Taylor and Francis | en |
dc.relation.url | http://www.tandfonline.com/doi/full/10.1080/15434303.2017.1350685 | en |
dc.rights | Green - can archive pre-print and post-print or publisher's version/PDF | |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | * |
dc.subject | language assessment | en |
dc.subject | X162 Teaching English as a Foreign Language (TEFL) | en |
dc.title | Linking tests of English for academic purposes to the CEFR: the score user’s perspective | en |
dc.type | Article | en |
dc.identifier.eissn | 1543-4311 | |
dc.identifier.journal | Language Assessment Quarterly | en |
dc.date.updated | 2017-11-23T10:56:32Z | |