Assessing English on the global stage: the British Council and English language testing, 1941-2016
Abstract
This book tells the story of the British Council's seventy-five-year involvement in the field of English language testing. The first section of the book explores the role of the British Council in spreading British influence around the world through the export of British English language examinations and British expertise in language testing. Founded in 1934, the organisation formally entered the world of English language testing with the signing of an agreement with the University of Cambridge Local Examinations Syndicate (UCLES) in 1941. This agreement, which was to last until 1993, saw the British Council provide substantial English as a Foreign Language (EFL) expertise and technical and financial assistance to help UCLES develop its suite of English language tests. Perhaps the high points of this phase were the British Council-inspired Cambridge Diploma of English Studies, introduced in the 1940s, and the central role played by the British Council in the conceptualisation and development of the highly innovative English Language Testing Service (ELTS) in the 1970s, the precursor to the present-day International English Language Testing System (IELTS). British Council support for the development of indigenous national English language tests around the world over the last thirty years further enhanced the promotion of English and the creation of soft power for Britain. In the early 1990s the focus of the British Council shifted from test development to the delivery of British examinations through its global network. By the early years of the 21st century, however, the organisation was actively considering a return to test development, a strategy realised with the founding of the Assessment Research Group in early 2012. This was followed later that year by the introduction of the Aptis English language testing service, the first major test developed in-house for over thirty years.
As well as setting the stage for the re-emergence of professional expertise in language testing within the organisation, these initiatives have given the organisation a growing strategic influence on assessment in English language education. This influence derives from a commitment to test localisation; the development and provision of flexible, accessible and affordable tests; and an efficient delivery, marking and reporting system underpinned by an innovative socio-cognitive approach to language testing. This final period can be seen as a clear return by the British Council to using language testing as a tool for enhancing soft power for Britain: a return to the organisation's original raison d'être.
Citation
Weir C J, O'Sullivan B (2017) Assessing English on the Global Stage: The British Council and English Language Testing, 1941-2016. London: Equinox.
Related items (by title, author, creator and subject):
Validating a set of Japanese EFL proficiency tests: demonstrating locally designed tests meet international standards
Dunlea, Jamie (University of Bedfordshire, 2015-12)
This study applied the latest developments in language testing validation theory to derive a core body of evidence that can contribute to the validation of a large-scale, high-stakes English as a Foreign Language (EFL) testing program in Japan. The testing program consists of a set of seven level-specific tests targeting different levels of proficiency. This core aspect of the program was selected as the main focus of this study. The socio-cognitive model of language test development and validation provided a coherent framework for the collection, analysis and interpretation of evidence. Three research questions targeted core elements of a validity argument identified in the literature on the socio-cognitive model. RQ 1 investigated the criterial contextual and cognitive features of tasks at different levels of proficiency. Expert judgment and automated analysis tools were used to analyze a large bank of items administered in operational tests across multiple years. RQ 2 addressed empirical item difficulty across the seven levels of proficiency. An innovative approach to vertical scaling was used to place previously administered items from all levels onto a single Rasch-based difficulty scale. RQ 3 used multiple standard-setting methods to investigate whether the seven levels could be meaningfully related to an external proficiency framework. In addition, the study identified three subsidiary goals: firstly, to evaluate the efficacy of applying international standards of best practice to a local context; secondly, to critically evaluate the model of validation; and thirdly, to generate insights directly applicable to operational quality assurance.
The study provides evidence across all three research questions to support the claim that the seven levels in the program are distinct. At the same time, the results provide insights into how to strengthen explicit task specification to improve consistency across levels. This study is the largest application of the socio-cognitive model in terms of the amount of operational data analyzed, and thus makes a significant contribution to the ongoing study of validity theory in the context of language testing. While the study demonstrates the efficacy of the socio-cognitive model selected to drive the research design, it also provides recommendations for further refining the model, with implications for the theory and practice of language testing validation.
Developing a model for investigating the impact of language assessment within educational contexts by a public examination provider
Saville, N.D. (University of Bedfordshire, 2009-01)
There is no comprehensive model of language test or examination impact and how it might be investigated within educational contexts by a provider of high-stakes examinations, such as an international examinations board. This thesis addresses the development of such a model from the perspective of Cambridge ESOL, a provider of English language tests and examinations in over 100 countries. The starting point for the thesis is a discussion of examinations within educational processes generally and the role that examination boards, such as Cambridge ESOL, play within educational systems. The historical context and assessment tradition are an important part of this discussion. In the literature review, the effects and consequences of language tests and examinations are discussed with reference to the better-known concept of washback and to how impact can be defined as a broader notion operating at both micro and macro levels. This is contextualised within the assessment literature on validity theory and the application of innovation theories within educational systems. Methodologically, the research is based on a meta-analysis employed to describe and review three impact projects. These projects were carried out by researchers based in Cambridge to implement an approach to test impact which had emerged during the 1990s as part of the test development and validation procedures adopted by Cambridge ESOL. Based on the analysis, the main outcome and contribution to knowledge is an expanded model of impact designed to provide examination providers with a more effective "theory of action".
When applied within Cambridge ESOL, this model will allow the anticipated impacts of the English language examinations to be monitored more effectively and will inform ongoing processes of innovation; this will lead to well-motivated improvements in the examinations and the related systems. Wider applications of the model in other assessment contexts are also suggested.