Language assessment literacy for learning-oriented language assessment
Abstract
This small-scale, exploratory study examined a set of authentic speaking test video samples from the Cambridge English: First (First Certificate in English) speaking test, in order to learn whether, and where, opportunities might be revealed in, or inserted into, formal speaking tests to provide language assessment literacy opportunities for language teachers teaching test preparation courses as well as teachers training to become speaking test raters. We paid particular attention to some basic components of effective interaction that we would want an examiner or interlocutor to exhibit if they seek to encourage interactive responses from test candidates. Looking closely at body language (in particular eye contact), intonation, pacing and pausing, management of turn-taking, and elicitation of candidate-candidate interaction, we saw ways in which a shift in focus to view tests as learning opportunities is possible: we call this new focus learning-oriented language assessment (LOLA).
Citation
Hamp-Lyons, L. (2017) 'Language assessment literacy for learning-oriented language assessment', Papers in Language Testing and Assessment, 6(1), pp. 88-111.
Additional Links
http://www.altaanz.org/uploads/5/9/0/8/5908292/7.si5hamplyons_final_formatted_proofed.pdf
Type
Article
Language
en
ISSN
2201-0009
Collections
The following license files are associated with this item:
- Creative Commons
Except where otherwise noted, this item's license is described as http://creativecommons.org/licenses/by-nc-nd/4.0/
Related items
Showing items related by title, author, creator and subject.
-
Validating a set of Japanese EFL proficiency tests: demonstrating locally designed tests meet international standards. Dunlea, Jamie (University of Bedfordshire, 2015-12). This study applied the latest developments in language testing validation theory to derive a core body of evidence that can contribute to the validation of a large-scale, high-stakes English as a Foreign Language (EFL) testing program in Japan. The testing program consists of a set of seven level-specific tests targeting different levels of proficiency. This core aspect of the program was selected as the main focus of this study. The socio-cognitive model of language test development and validation provided a coherent framework for the collection, analysis and interpretation of evidence. Three research questions targeted core elements of a validity argument identified in the literature on the socio-cognitive model. RQ 1 investigated the criterial contextual and cognitive features of tasks at different levels of proficiency. Expert judgment and automated analysis tools were used to analyze a large bank of items administered in operational tests across multiple years. RQ 2 addressed empirical item difficulty across the seven levels of proficiency. An innovative approach to vertical scaling was used to place previously administered items from all levels onto a single Rasch-based difficulty scale. RQ 3 used multiple standard-setting methods to investigate whether the seven levels could be meaningfully related to an external proficiency framework. In addition, the study identified three subsidiary goals: firstly, to evaluate the efficacy of applying international standards of best practice to a local context; secondly, to critically evaluate the model of validation; and thirdly, to generate insights directly applicable to operational quality assurance. The study provides evidence across all three research questions to support the claim that the seven levels in the program are distinct. At the same time, the results provide insights into how to strengthen explicit task specification to improve consistency across levels. This study is the largest application of the socio-cognitive model in terms of the amount of operational data analyzed, and thus makes a significant contribution to the ongoing study of validity theory in the context of language testing. While the study demonstrates the efficacy of the socio-cognitive model selected to drive the research design, it also provides recommendations for further refining the model, with implications for the theory and practice of language testing validation.
-
Linking writing and speaking in English as a Second Language assessment. Hamp-Lyons, Liz (Hampton Press, 2012-03)
-
Developing a model for investigating the impact of language assessment within educational contexts by a public examination provider. Saville, N.D. (University of Bedfordshire, 2009-01). There is no comprehensive model of language test or examination impact and how it might be investigated within educational contexts by a provider of high-stakes examinations, such as an international examinations board. This thesis addresses the development of such a model from the perspective of Cambridge ESOL, a provider of English language tests and examinations in over 100 countries. The starting point for the thesis is a discussion of examinations within educational processes generally and the role that examination boards, such as Cambridge ESOL, play within educational systems. The historical context and assessment tradition is an important part of this discussion. In the literature review, the effects and consequences of language tests and examinations are discussed with reference to the better known concept of washback and how impact can be defined as a broader notion operating at both micro and macro levels. This is contextualised within the assessment literature on validity theory and the application of innovation theories within educational systems. Methodologically, the research is based on a meta-analysis which is employed in order to describe and review three impact projects. These three projects were carried out by researchers based in Cambridge to implement an approach to test impact which had emerged during the 1990s as part of the test development and validation procedures adopted by Cambridge ESOL. Based on the analysis, the main outcome and contribution to knowledge is an expanded model of impact designed to provide examination providers with a more effective “theory of action”. When applied within Cambridge ESOL, this model will allow anticipated impacts of the English language examinations to be monitored more effectively and will inform on-going processes of innovation; this will lead to well-motivated improvements in the examinations and the related systems. Wider applications of the model in other assessment contexts are also suggested.