Show simple item record

dc.contributor.author: Galaczi, Evelina D.
dc.contributor.author: ffrench, Angela
dc.contributor.author: Hubbard, Chris
dc.contributor.author: Green, Anthony
dc.date.accessioned: 2012-05-21T14:16:57Z
dc.date.available: 2012-05-21T14:16:57Z
dc.date.issued: 2011-08
dc.identifier.citation: Developing assessment scales for large-scale speaking tests: a multiple-method approach (2011). Assessment in Education: Principles, Policy & Practice, 18(3), 217–237
dc.identifier.issn: 0969-594X
dc.identifier.issn: 1465-329X
dc.identifier.doi: 10.1080/0969594X.2011.574605
dc.identifier.uri: http://hdl.handle.net/10547/224996
dc.description.abstract: The process of constructing assessment scales for performance testing is complex and multi-dimensional. As a result, a number of different approaches, both empirically and intuitively based, are open to developers. In this paper we outline the approach taken in the revision of a set of assessment scales used with speaking tests, and present the value of combining methodologies to inform and refine scale development. We set the process in the context of the growing influence of the Common European Framework of Reference (Council of Europe 2001) and outline a number of stages in terms of the procedures followed and outcomes produced. The findings describe a range of data that was collected and analysed through a number of phases and used to inform the revision of the scales, including consultation with experts, and data-driven qualitative and quantitative research studies. The overall aim of the paper is to illustrate the importance of combining intuitive and data-driven scale construction methodologies, and to suggest a usable scale construction model for application or adaptation in a variety of contexts.
dc.language.iso: en
dc.publisher: Taylor and Francis
dc.relation.url: http://www.tandfonline.com/doi/abs/10.1080/0969594X.2011.574605
dc.title: Developing assessment scales for large-scale speaking tests: a multiple-method approach
dc.type: Article
dc.contributor.department: University of Cambridge
dc.contributor.department: University of Bedfordshire
dc.identifier.journal: Assessment in Education: Principles, Policy & Practice


