• Language assessment literacy for learning-oriented language assessment

      Hamp-Lyons, Liz (Australian Association of Applied Linguistics, 2017-12-16)
A small-scale, exploratory study examined a set of authentic speaking test video samples from Cambridge English: First (First Certificate in English), in order to learn whether, and where, opportunities might be revealed in, or inserted into, formal speaking tests in order to provide language assessment literacy opportunities for language teachers teaching test preparation courses as well as teachers training to become speaking test raters. We paid particular attention to some basic components of effective interaction that we would want an examiner or interlocutor to exhibit if they seek to encourage interactive responses from test candidates. Looking closely at body language (in particular eye contact), intonation, pacing and pausing, management of turn-taking, and elicitation of candidate-candidate interaction, we saw ways in which a shift in focus to view tests as learning opportunities is possible: we call this new focus learning-oriented language assessment (LOLA).
    • Language learning gains among users of English Liulishuo

      Green, Anthony; O'Sullivan, Barry; LAIX (LAIX, 2019-02-26)
      This study investigated improvements in English language ability (as measured by the British Council Aptis test) among 746 users of the English Liulishuo app, the flagship mobile app produced by LAIX Inc. (NYSE:LAIX), taking courses at three levels over a period of approximately two months.
    • Language testing and validation: an evidence based approach

      Weir, Cyril J. (Palgrave, 2005-01-01)
Tests for the measurement of language abilities must be constructed according to a coherent validity framework based on the latest developments in theory and practice. This innovative book, by a world authority on language testing, deals with all key aspects of language test design and implementation. It provides a road map to effective testing based on the latest approaches to test validation. A book for all MA students in Applied Linguistics or TESOL, and for professional language teachers.
    • Learning oriented feedback in the development and assessment of interactional competence

      Nakatsuhara, Fumiyo; May, Lyn; Lam, Daniel M. K.; Galaczi, Evelina D.; Cambridge Assessment English; University of Bedfordshire; Queensland University of Technology (Cambridge Assessment English, 2018-01-01)
This project developed practical tools to support the classroom assessment of learners’ interactional competence (IC) and provide learning-oriented feedback in the context of Cambridge English: First (now known as B2 First). To develop a checklist, accompanying descriptions and recommendations for teachers to use in providing feedback on learners’ interactional skills, 72 stimulated verbal reports were elicited from six trained examiners who were also experienced teachers. They produced verbal reports on 12 paired interactions with high, mid, and low interactive communication scores. The examiners were asked to comment on features of the interaction that influenced their rating of candidates’ IC and, based on the features of the performance they noted, provide feedback to candidates. The verbal reports were thematically analysed using NVivo 11 to inform a draft checklist and materials, which were then trialled by four experienced teachers in order to further refine these resources. The final product comprises (a) a full IC checklist with nine main categories and over 50 sub-categories which further specify positive and negative aspects, with accompanying detailed descriptions of each area and feedback to learners, and (b) a concise version of the IC checklist with fewer categories and ‘bite-sized’ feedback to learners, to support use by teachers and learners in real time. As such, this research addressed the area of meaningful feedback to learners on IC, which is an essential component of communicative language ability and yet cannot be effectively addressed via digital technologies, and therefore needs substantial teacher involvement. This study, in line with the Cambridge English Learning Oriented Assessment (LOA) approach (e.g. Hamp-Lyons and Green 2014, Jones and Saville 2014, 2016), took the first step towards offering teachers practical tools for feedback on learners’ interactional skills.
Additionally, these tools have the potential to be integrated into the learning management system of the Empower course, aligning classroom and standardised assessment.
    • Learning-oriented language test preparation materials: a contradiction in terms?

      Green, Anthony; University of Bedfordshire (Association for Language Testing and Assessment of Australia and New Zealand (ALTAANZ), 2017-11-10)
The impact of the use of assessment on teaching and learning is increasingly regarded as a key concern in evaluating assessment use. Realising intended forms of impact relies on more than the design of an assessment: account must also be taken of the ways in which teachers, learners and others understand the demands of the assessment and incorporate these into their practice. The measures that testing agencies take to present and explicate their tests to teachers and other stakeholders therefore play an important role in promoting intended impact and mitigating unintended, negative impact. Materials that support teachers in preparing learners to take tests (such as descriptions of the test, preparation materials and teacher training resources) play an important role in communicating the test providers’ intentions. This study analyses a selection of these support materials, provided to teachers by Cambridge English Language Assessment, which accompany the Speaking component of a major international test of general English proficiency: Cambridge English: First. The study addresses how these materials might embody or reflect learning-oriented assessment principles of task authenticity, learner engagement and feedback within a coherent systemic theory of action, reconciling formative and summative assessment functions to the benefit of learning.
    • Linking tests of English for academic purposes to the CEFR: the score user’s perspective

      Green, Anthony (Taylor and Francis, 2017-11-13)
      The Common European Framework of Reference for Languages (CEFR) is widely used in setting language proficiency requirements, including for international students seeking access to university courses taught in English. When different language examinations have been related to the CEFR, the process is claimed to help score users, such as university admissions staff, to compare and evaluate these examinations as tools for selecting qualified applicants. This study analyses the linking claims made for four internationally recognised tests of English widely used in university admissions. It uses the Council of Europe’s (2009) suggested stages of specification, standard setting, and empirical validation to frame an evaluation of the extent to which, in this context, the CEFR has fulfilled its potential to “facilitate comparisons between different systems of qualifications.” Findings show that testing agencies make little use of CEFR categories to explain test content; represent the relationships between their tests and the framework in different terms; and arrive at conflicting conclusions about the correspondences between test scores and CEFR levels. This raises questions about the capacity of the CEFR to communicate competing views of a test construct within a coherent overarching structure.
    • Marking, rating scales and rubrics

      Green, Anthony (Cambridge University Press, 2012-04-01)
    • Measuring L2 speaking

      Nakatsuhara, Fumiyo; Inoue, Chihiro; Khabbazbashi, Nahal (Routledge, 2019-07-11)
      This chapter on measuring L2 speaking has three main focuses: (a) construct representation, (b) test methods and task design, and (c) scoring and feedback. We will briefly trace the different ways in which speaking constructs have been defined over the years and operationalized using different test methods and task features. We will then discuss the challenges and opportunities that speaking tests present for scoring and providing feedback to learners. We will link these discussions to the current understanding of SLA theories and empirical research, learning oriented assessment approaches and advances in educational technology.
    • The mediation and organisation of gestures in vocabulary instructions: a microgenetic analysis of interactions in a beginning-level adult ESOL classroom

      Tai, Kevin W.H.; Khabbazbashi, Nahal (Taylor & Francis, 2019-04-26)
      There is limited research on second language (L2) vocabulary teaching and learning which provides fine-grained descriptions of how vocabulary explanations (VE) are interactionally managed in beginning-level L2 classrooms where learners have a limited L2 repertoire, and how the VEs could contribute to the learners’ conceptual understanding of the meaning(s) of the target vocabulary items (VIs). To address these research gaps, we used a corpus of classroom video-data from a beginning-level adult ESOL classroom in the United States and applied Conversation Analysis to examine how the class teacher employs various gestural and linguistic resources to construct L2 VEs. We also conducted a 4-month microgenetic analysis to document qualitative changes in learners’ understanding of the meaning of specific L2 VIs which were previously explained by the teacher. Findings revealed that the learners’ use of gestures allows for an externalization of thinking processes providing visible output for inspection by the teacher and peers. These findings can inform educators’ understanding about L2 vocabulary development as a gradual process of controlling the right gestural and linguistic resources for appropriate communicative purposes.
    • The need for EAP teacher knowledge in assessment

      Schmitt, Diane; Hamp-Lyons, Liz; Nottingham Trent University; University of Bedfordshire (Elsevier Ltd, 2015-05-08)
    • A new test for China? Stages in the development of an assessment for professional purposes.

      Jin, Yan; Hamp-Lyons, Liz; Shanghai Jiao Tong University; University of Bedfordshire (Taylor & Francis, 2015-03-22)
It is increasingly recognised that attention should be paid to investigating the needs of a new test, especially in contexts where specific purpose language needs might be identified. This article describes the stages involved in establishing the need for a new assessment of English for professional purposes in China. We first investigated stakeholders’ perceptions of the target language use activities and the necessity of the proposed assessment. We then analysed five existing tests and six language frameworks to evaluate their suitability for the needs of the proposed assessment. The resulting proposal is for an advanced-level English assessment capable of providing a diagnostic evaluation of the proficiency of potential employees in areas of relevance to multinationals operating in China. The study has demonstrated the value of following a principled procedure to investigate the necessity for and the needs of a new test at the very beginning of the test development process.
    • Opening the black box: exploring automated speaking evaluation

      Khabbazbashi, Nahal; Xu, Jing; Galaczi, Evelina D. (Springer, 2021-02-10)
      The rapid advances in speech processing and machine learning technologies have attracted language testers’ strong interest in developing automated speaking assessment in which candidate responses are scored by computer algorithms rather than trained human examiners. Despite its increasing popularity, automatic evaluation of spoken language is still shrouded in mystery and technical jargon, often resembling an opaque "black box" that transforms candidate speech to scores in a matter of minutes. Our chapter explicitly problematizes this lack of transparency around test score interpretation and use and asks the following questions: What do automatically derived scores actually mean? What are the speaking constructs underlying them? What are some common problems encountered in automated assessment of speaking? And how can test users evaluate the suitability of automated speaking assessment for their proposed test uses? In addressing these questions, the purpose of our chapter is to explore the benefits, problems, and caveats associated with automated speaking assessment touching on key theoretical discussions on construct representation and score interpretation as well as practical issues such as the infrastructure necessary for capturing high quality audio and the difficulties associated with acquiring training data. We hope to promote assessment literacy by providing the necessary guidance for users to critically engage with automated speaking assessment, pose the right questions to test developers, and ultimately make informed decisions regarding the fitness for purpose of automated assessment solutions for their specific learning and assessment contexts.
    • Opposing tensions of local and international standards for EAP writing programmes: who are we assessing for?

      Bruce, Emma; Hamp-Lyons, Liz; City University of Hong Kong; University of Bedfordshire (Elsevier Ltd, 2015-04-24)
In response to recent curriculum changes in secondary schools in Hong Kong, including the implementation of the 3-3-4 education structure (with one year less at high school and one year more at university) and the introduction of a new school leavers' exam, the Hong Kong Diploma of Secondary Education (HKDSE), universities in the territory have revisited their English language curriculums. At City University, a new EAP curriculum and assessment framework was developed to fit the re-defined needs of the new cohort of students. In this paper we describe the development and benchmarking process of a scoring instrument for EAP writing assessment at City University. We discuss the opposing tensions of local (HKDSE) and international (CEFR and IELTS) standards, the problems of aligning EAP needs-based domain scales and standards with the CEFR, and the issues associated with attempting to fulfil the institutional expectation that the EAP programme would raise students' scores by a whole CEFR scale step. Finally, we consider the political tensions created by the use of external, even international, reference points for specific levels of writing performance from all our students, and suggest the benefits of a specific, locally-designed, fit-for-purpose tool over one aligned with universal standards.
    • The origins and adaptations of English as a school subject

      Goodwyn, Andrew (Cambridge University Press, 2019-12-31)
      This chapter will consider the particular manifestation of English as a ‘school subject’, principally in the country called England and using some small space for significant international comparisons, and it will mainly focus on the secondary school version. We will call this phenomenon School Subject English (SSE). The chapter will argue that historically SSE has gone through phases of development and adaptation, some aspects of these changes inspired by new theories and concepts and by societal change, some others, especially more recently, entirely reactive to external impositions (for an analysis of the current position of SSE, see Roberts, this volume). This chapter considers SSE to have been ontologically ‘expanded’ between 1870 and (about) 1990, increasing the ambition and scope of the ‘subject’ and the emancipatory ideology of its teachers. This ontological expansion was principally a result of adding ‘models’ of SSE, models that each emphasise different epistemologies of what counts as significant knowledge, and can only exist in a dynamic tension. In relation to this volume, SSE has always incorporated close attention to language but only very briefly (1988–1992) has something akin to Applied Linguistics had any real influence in the secondary classroom. However, with varying emphasis historically, there has been attention (the Adult Needs/Skills model, see later) to the conventions of language, especially ‘secretarial’ issues of spelling and punctuation, some understanding of grammar, and a focus on notions of Standard English, in writing and in speech; but these have never been the driving ideology of SSE. Of the two conceptual giants ‘Language’ and ‘Literature’, it is the latter that has mattered most over those 120 years.
    • Paper-based vs computer-based writing assessment: divergent, equivalent or complementary?

      Chan, Sathena Hiu Chong (Elsevier, 2018-05-16)
Writing on a computer is now commonplace in most post-secondary educational contexts and workplaces, making research into computer-based writing assessment essential. This special issue of Assessing Writing includes a range of articles focusing on computer-based writing assessments. Some of these have been designed to parallel an existing paper-based assessment; others have been constructed as computer-based from the beginning. The selection of papers addresses various dimensions of the validity of computer-based writing assessment use in different contexts and across levels of L2 learner proficiency. First, three articles deal with the impact of the two delivery modes, paper-based and computer-based, on test takers’ processing and performance in large-scale high-stakes writing tests; next, two articles explore the use of online writing assessment in higher education; the final two articles evaluate the use of technologies to provide feedback to support learning.
    • Phatic communication and relevance theory: a reply to Ward & Horn

      Žegarac, Vladimir; Clark, Billy (Cambridge University Press, 1999-11-01)
      In Žegarac & Clark (1999) we try to show how phatic communication can be explained within the framework of Relevance Theory. We suggest that phatic communication should be characterized as a particular type of interpretation, which we call ‘phatic interpretation’. On our account, an interpretation is phatic to the extent that its main relevance lies with implicated conclusions which do not depend on the explicit content of the utterance, but rather on the communicative intention (where ‘depends on X’ means: ‘results from an inferential process which takes X as a premise’).
    • Phatic interpretations and phatic communication

      Žegarac, Vladimir; Clark, Billy (Cambridge University Press, 1999-07-01)
This paper considers how the notion of phatic communication can best be understood within the framework of Relevance Theory. To a large extent, we are exploring a terminological question: which things which occur during acts of verbal communication should the term 'phatic' apply to? The term is perhaps most frequently used in the phrase 'phatic communication', which has been thought of as an essentially social phenomenon and therefore beyond the scope of cognitive pragmatic theories. We suggest, instead, that the term should be applied to interpretations and that an adequate account of phatic interpretations requires an account of the cognitive processes involved in deriving them. Relevance Theory provides the basis for such an account. In section 1, we indicate the range of phenomena to be explored. In section 2, we outline the parts of Relevance Theory which are used in our account. In section 3, we argue that the term 'phatic' should be applied to interpretations, and we explore predictions about phatic interpretations which follow from the framework of Relevance Theory, including the claim that phatic interpretations should be derived only when non-phatic interpretations are not consistent with the Principle of Relevance. In section 4 we consider cases where cognitive effects similar to those caused by phatic interpretations are conveyed but not ostensively communicated.
    • Placement testing

      Green, Anthony (TESOL International Association and Wiley, 2018-01-01)