• Applying the socio-cognitive framework: gathering validity evidence during the development of a speaking test

      Nakatsuhara, Fumiyo; Dunlea, Jamie; University of Bedfordshire; British Council (UCLES/Cambridge University Press, 2020-06-18)
      This chapter describes how Weir’s (2005; further elaborated in Taylor (Ed) 2011) socio-cognitive framework for validating speaking tests guided two a priori validation studies of the speaking component of the Test of English for Academic Purposes (TEAP) in Japan. We particularly reflect upon the academic achievements of Professor Cyril J Weir in terms of:
      • the effectiveness and value of the socio-cognitive framework underpinning the development of the TEAP Speaking Test, while gathering empirical evidence of the construct underlying a speaking test for the target context
      • his contribution to developing early career researchers and extending language testing expertise in the TEAP development team.
    • Assessing speaking

      Nakatsuhara, Fumiyo; Khabbazbashi, Nahal; Inoue, Chihiro; University of Bedfordshire (Routledge, 2021-12-16)
      In this chapter on assessing speaking, the history of speaking assessment is briefly traced in terms of the various ways in which speaking constructs have been defined and diversified over the past century. This is followed by a discussion of elicitation tasks, test delivery modes, rating methods, and scales that offered opportunities and/or presented challenges in operationalising different constructs of speaking and providing feedback. Several methods utilised in researching speaking assessment are then considered. Informed by recent research and advances in technology, the chapter provides recommendations for practice in both high-stakes and low-stakes contexts.
    • Comparing rating modes: analysing live, audio, and video ratings of IELTS Speaking Test performances

      Nakatsuhara, Fumiyo; Inoue, Chihiro; Taylor, Lynda (Taylor & Francis, 2020-08-26)
      This mixed-methods study compared IELTS examiners’ scores when assessing spoken performances under live and two ‘non-live’ testing conditions using audio and video recordings. Six IELTS examiners assessed 36 test-takers’ performances under the live, audio, and video rating conditions. Scores in the three rating modes were calibrated using the many-facet Rasch model (MFRM). For all three modes, examiners provided written justifications for their ratings, and verbal reports were also collected to gain insights into examiner perceptions of performance under the audio and video conditions. Results showed that, for all rating criteria, audio ratings were significantly lower than live and video ratings. Examiners noticed more negative performance features under the two non-live rating conditions than under the live condition. However, the richer information about test-taker performance available in the video mode appeared to lead examiners to rely less on such negative evidence when awarding scores than they did in the audio mode. Verbal report data showed that having visual information in the video-rating mode helped examiners to understand what the test-takers were saying, to comprehend better what test-takers were communicating using non-verbal means, and to understand with greater confidence the source of test-takers’ hesitation, pauses, and awkwardness.
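      For readers unfamiliar with the calibration method, a standard three-facet Rasch formulation is sketched below; the facet labels mirror this study’s design (test-takers, rating criteria, examiners) and are illustrative rather than reproduced from the paper:

      \[ \ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k \]

      where \(P_{nijk}\) is the probability that test-taker \(n\) receives score category \(k\) rather than \(k-1\) on criterion \(i\) from examiner \(j\); \(B_n\) is test-taker ability, \(D_i\) criterion difficulty, \(C_j\) examiner severity, and \(F_k\) the threshold between categories \(k-1\) and \(k\). A rating-mode facet can be added in the same way to estimate mode effects such as those reported here.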
    • Comparing writing proficiency assessments used in professional medical registration: a methodology to inform policy and practice

      Chan, Sathena Hiu Chong; Taylor, Lynda; University of Bedfordshire (Elsevier, 2020-10-13)
      Internationally trained doctors wishing to register and practise in an English-speaking country typically have to demonstrate that they can communicate effectively in English, including writing proficiency. Various English language proficiency (ELP) tests are available worldwide and are used for such licensing purposes. This means that medical registration bodies face the question of which test(s) will meet their needs, ideally reflecting the demands of their professional environment. This article reports a mixed-methods study to survey the policy and practice of health-care registration organisations in the UK and worldwide. The study aimed to identify ELP tests that were, or could be, considered as suitable for medical registration purposes and to understand the differences between them. The paper discusses what the study revealed about the function and comparability of different writing tests used in professional registration as well as the complex criteria a professional body may prioritise when selecting a test. Although the original study was completed in 2015, the paper takes account of subsequent changes in policy and practice. It offers a practical methodology and worked example which may be of interest and value to other researchers, language testers and policymakers as they face challenges in selecting and making comparisons across tests.
    • The design and validation of an online speaking test for young learners in Uruguay: challenges and innovations

      Khabbazbashi, Nahal; Nakatsuhara, Fumiyo; Inoue, Chihiro; Kaplan, Gabriela; Green, Anthony; University of Bedfordshire; Plan Ceibal (Cranmore Publishing on behalf of the International TESOL Union, 2022-02-10)
      This research presents the development of an online speaking test of English for students at the end of primary and beginning of secondary school education in state schools in Uruguay. Following the success of the Plan Ceibal one-computer/tablet-per-child initiative, there was a drive to further utilize technology to improve the language ability of students, particularly in speaking, where the majority of students are at CEFR levels pre-A1 and A1. The national concern over a lack of spoken communicative skills amongst students led to a decision to develop a new speaking test, specifically tailored to local needs. This paper provides an overview of the speaking test development and validation project, designed with the following objectives in mind: to establish, track, and report annually learners’ achievements against the Common European Framework of Reference for Languages (CEFR), targeting levels pre-A1 to A2; to inform teaching and learning; and to promote speaking practice in classrooms. Results of a three-phase mixed-methods study involving small-scale and large-scale trials with learners and examiners, as well as a CEFR-linking exercise with expert panelists, will be reported. Different sources of evidence will be brought together to build a validity argument for the test. The paper will also focus on some of the challenges involved in assessing young learners and discuss how design decisions, local knowledge and expertise, and technological innovations can be used to address such challenges, with implications for other similar test development projects.
    • The effects of extended planning time on candidates’ performance, processes and strategy use in the lecture listening-into-speaking tasks of the TOEFL iBT Test

      Inoue, Chihiro; Lam, Daniel M. K.; Educational Testing Service (Wiley, 2021-06-21)
      This study investigated the effects of two different planning time conditions (i.e., the operational length [20 s] and an extended length [90 s]) for the lecture listening-into-speaking tasks of the TOEFL iBT® test for candidates at different proficiency levels. Seventy international students based in universities and language schools in the United Kingdom (35 at a lower level; 35 at a higher level) participated in the study. The effects of different lengths of planning time were examined in terms of (a) the scores given by ETS-certified raters; (b) the quality of the speaking performances, characterized by accurately reproduced idea units and measures of complexity, accuracy, and fluency; and (c) self-reported use of cognitive and metacognitive processes and strategies during listening, planning, and speaking. The analysis found neither a statistically significant main effect of planning time nor an interaction between planning time and proficiency, whether on the scores or on the quality of the speaking performances. Significantly more engagement with several cognitive and metacognitive processes and strategies was reported under the extended planning time, which suggests enhanced cognitive validity of the task. However, the increased engagement in planning did not lead to any measurable improvement in scores. Therefore, in the interest of practicality, the results of this study provide justification for the operational length of planning time for the lecture listening-into-speaking tasks in the speaking section of the TOEFL iBT test.
    • Exploring the potential for assessing interactional and pragmatic competence in semi-direct speaking tests

      Nakatsuhara, Fumiyo; May, Lyn; Inoue, Chihiro; Willcox-Ficzere, Edit; Westbrook, Carolyn; Spiby, Richard; University of Bedfordshire; Queensland University of Technology; Oxford Brookes University; British Council (British Council, 2021-11-11)
      To explore the potential of a semi-direct speaking test to assess a wider range of communicative language ability, the researchers developed four semi-direct speaking tasks – two designed to elicit features of interactional competence (IC) and two designed to elicit features of pragmatic competence (PC). The four tasks, as well as one benchmarking task, were piloted with 48 test-takers in China and Austria whose proficiency ranged from CEFR B1 to C. A post-test feedback survey was administered to all test-takers, after which selected test-takers were interviewed. A total of 184 task performances were analysed to identify the interactional moves utilised by test-takers across three proficiency groups (i.e., B1, B2 and C). The data indicated that test-takers at higher levels employed a wider variety of interactional moves. They made use of concurring concessions and counter-views when seeking to persuade a (hypothetical) conversational partner to change opinions in the IC tasks, and they projected upcoming requests and made face-related statements in the PC tasks, seemingly to pre-empt a conversational partner’s negative response to the request. The test-takers perceived the tasks to be highly authentic and found the video input useful in understanding the target audience of the simulated interactions.
    • Eye-tracking L2 students taking online multiple-choice reading tests: benefits and challenges

      Latimer, Nicola; Chan, Sathena Hiu Chong (Cranmore Publishing, 2022-04-10)
      Recently, there has been a marked increase in language testing research involving eye-tracking. It appears to offer a useful methodology for examining cognitive validity in language tests, i.e., the extent to which the mental processes that a language test elicits from test-takers resemble those that they would employ in the target language use domains. This article reports on a recent study which examined the reading processes of test-takers at different proficiency levels on a reading proficiency test. Using a mixed-methods approach, the study collected cognitive validity evidence through eye-tracking and stimulated recall interviews. The study investigated whether there are differences in reading behaviour among test-takers at CEFR B1, B2 and C1 levels on an online reading task. The main findings are reported, and their implications are discussed to reflect on some fundamental questions regarding the use of eye-tracking in language testing research.
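      To illustrate the kind of measure such eye-tracking studies derive, the following minimal Python sketch (not the authors’ code; the AOI boundaries and field names are hypothetical) aggregates raw fixations into total fixation duration per area of interest (AOI), a common index of where and for how long readers attend:

      # A minimal, hypothetical sketch: total fixation duration per AOI.
      from dataclasses import dataclass
      from collections import defaultdict

      @dataclass
      class Fixation:
          x: float          # gaze position in screen pixels
          y: float
          duration_ms: int  # fixation duration in milliseconds

      # AOIs as named rectangles (x_min, y_min, x_max, y_max); illustrative values
      AOIS = {"text_passage": (0, 0, 800, 600), "question": (0, 620, 800, 760)}

      def total_fixation_duration(fixations):
          """Sum fixation durations falling inside each AOI."""
          totals = defaultdict(int)
          for f in fixations:
              for name, (x0, y0, x1, y1) in AOIS.items():
                  if x0 <= f.x <= x1 and y0 <= f.y <= y1:
                      totals[name] += f.duration_ms
          return dict(totals)

      sample = [Fixation(120, 340, 210), Fixation(400, 650, 180)]
      print(total_fixation_duration(sample))  # {'text_passage': 210, 'question': 180}

      Comparing such totals (alongside fixation counts and regressions) across proficiency groups is what allows differences in reading behaviour to be quantified.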
    • On topic validity in speaking tests

      Khabbazbashi, Nahal; University of Bedfordshire (Cambridge University Press, 2021-11-22)
      Topics are often used as a key speech elicitation method in performance-based assessments of spoken language, and yet the validity and fairness issues surrounding topics are surprisingly under-researched. Are different topics ‘equivalent’ or ‘parallel’? Can some topics bias against or favour individuals or groups of individuals? Does background knowledge of topics have an impact on performance? Might the content of test-taker speech affect their scores – and perhaps more importantly, should it? Grounded in the real-world assessment context of IELTS, this volume draws on original data as well as insights from empirical and theoretical research to address these questions against the backdrop of one of the world’s most high-stakes language tests. This volume provides:
      • an up-to-date review of theoretical and empirical literature related to topic and background knowledge effects on second language performance
      • an accessible and systematic description of a mixed-methods research study with explanations of design, analysis, and interpretation considerations at every stage
      • a comprehensive and coherent approach for building a validity argument in a given assessment context.
      The volume also contributes to critiques of recent models of communicative competence that over-rely on linguistic features at the expense of more complex aspects of communication, arguing for an expansion of current definitions of the speaking construct that emphasises the role of the content of speech as an important, yet often neglected, feature.
    • Opening the black box: exploring automated speaking evaluation

      Khabbazbashi, Nahal; Xu, Jing; Galaczi, Evelina D. (Springer, 2021-02-10)
      The rapid advances in speech processing and machine learning technologies have attracted language testers’ strong interest in developing automated speaking assessment, in which candidate responses are scored by computer algorithms rather than trained human examiners. Despite its increasing popularity, automatic evaluation of spoken language is still shrouded in mystery and technical jargon, often resembling an opaque "black box" that transforms candidate speech into scores in a matter of minutes. Our chapter explicitly problematizes this lack of transparency around test score interpretation and use and asks the following questions: What do automatically derived scores actually mean? What are the speaking constructs underlying them? What are some common problems encountered in automated assessment of speaking? And how can test users evaluate the suitability of automated speaking assessment for their proposed test uses? In addressing these questions, the purpose of our chapter is to explore the benefits, problems, and caveats associated with automated speaking assessment, touching on key theoretical discussions of construct representation and score interpretation as well as practical issues such as the infrastructure necessary for capturing high-quality audio and the difficulties associated with acquiring training data. We hope to promote assessment literacy by providing the necessary guidance for users to critically engage with automated speaking assessment, pose the right questions to test developers, and ultimately make informed decisions regarding the fitness for purpose of automated assessment solutions for their specific learning and assessment contexts.
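      To make the "black box" metaphor concrete, the following minimal Python sketch shows the skeleton shared by many automated scoring systems: hand-crafted features are extracted from candidate speech, and a regression model trained on human ratings maps them to a score. It is an illustration only; the feature definitions, thresholds, and data are placeholders, not any operational system’s design.

      import numpy as np
      from sklearn.linear_model import Ridge

      def fluency_features(wave, sr=16000):
          """Two crude fluency proxies from a mono waveform: the fraction of
          10 ms frames with above-threshold energy, and pauses per second
          (unvoiced runs of at least 300 ms)."""
          frame = sr // 100                      # samples per 10 ms frame
          n = len(wave) // frame * frame
          energy = (wave[:n].reshape(-1, frame) ** 2).mean(axis=1)
          voiced = energy > 0.1 * energy.mean()  # illustrative energy threshold
          pauses, run = 0, 0
          for v in voiced:                       # count unvoiced runs of >= 30 frames
              run = 0 if v else run + 1
              if run == 30:
                  pauses += 1
          return np.array([voiced.mean(), pauses / (n / sr)])

      rng = np.random.default_rng(0)
      clips = [rng.normal(0, 0.3, 16000 * 10) for _ in range(20)]  # 10 s stand-ins
      X = np.stack([fluency_features(c) for c in clips])
      y = rng.uniform(4.0, 9.0, size=20)         # placeholder human band scores
      model = Ridge().fit(X, y)                  # learn the feature-to-score mapping
      print("predicted band:", model.predict(X[:1]))

      Operational systems replace these toy features with many delivery, language-use, and content measures, but the interpretive question remains the same: a score can only mean what its features capture.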
    • Placing construct definition at the heart of assessment: research, design and a priori validation

      Chan, Sathena Hiu Chong; Latimer, Nicola (Cambridge University Press, 2020-04-01)
      In this chapter, we will first highlight Professor Cyril Weir’s major research into the nature of academic reading. Using one of his test development projects as an example, we will describe how the construct of academic reading was operationalised in the local context of a British university, through theoretical construct definition together with empirical analyses of students’ reading patterns on the test using eye-tracking. As we progress through the chapter, we reflect on how Weir’s various research projects fed into the development of the test and a new method of analysing eye-tracking data in relation to different types of reading.
    • Research and practice in assessing academic reading: the case of IELTS

      Weir, Cyril J.; Chan, Sathena Hiu Chong (Cambridge University Press, 2019-08-29)
      The focus for attention in this volume is the reading component of the IELTS Academic module, which is principally used for admissions purposes into tertiary-level institutions throughout the world (see Davies 2008 for a detailed history of the developments in EAP testing leading up to the current IELTS). According to the official website (www.cambridgeenglish.org/exams-and-tests/ielts/test-format/), there are three reading passages in the Academic Reading Module with a total of c.2,150–2,750 words. Individual tasks are not timed. Texts are taken from journals, magazines, books, and newspapers. All the topics are of general interest and the texts have been written for a non-specialist audience. The readings are intended to be about issues that are appropriate to candidates who will enter postgraduate or undergraduate courses. At least one text will contain detailed logical argument. One of the texts may contain non-verbal materials such as graphs, illustrations or diagrams. If the text contains technical terms which candidates may not know, a glossary is provided. The texts and questions become more difficult through the paper. A number of specific critical questions are addressed in applying the socio-cognitive validation framework to the IELTS Academic Reading Module:
      • Are the cognitive processes required to complete the IELTS Reading test tasks appropriate and adequate in their coverage? (Focus on cognitive validity in Chapter 4.)
      • Are the contextual characteristics of the test tasks and their administration appropriate and fair to the candidates who are taking them? (Focus on context validity in Chapter 5.)
      • What effects do the test and test scores have on various stakeholders? (Focus on consequential validity in Chapter 6.)
      • What external evidence is there that the test is fair? (Focus on criterion-related validity in Chapter 7.)
    • Towards the new construct of academic English in the digital age

      Khabbazbashi, Nahal; Chan, Sathena Hiu Chong; Clark, Tony; University of Bedfordshire; Cambridge University Press and Assessment (Oxford University Press, 2022-03-28)
      The increasing use of digital educational technologies in Higher Education (HE) means that the nature of communication may be shifting. Assessments of English for Academic Purposes (EAP) need to be reconceptualised accordingly, to reflect the new and complex ways in which language is used in HE. With a view to informing EAP assessments, our study set out to identify key trends related to Academic English using a scoping review of the literature. Findings revealed two major trends: (a) a shift towards multimodal communication, which has in turn resulted in the emergence of new types of academic assignments and multimodal genres, and the need for students to acquire new skills to operate within this multimodal arena; and (b) the limitations of existing skills-based approaches to assessment and the need to move towards integrated skills assessment. We discuss the implications of these findings for EAP assessments.
    • Validation of a large-scale task-based test: functional progression in dialogic speaking performance

      Inoue, Chihiro; Nakatsuhara, Fumiyo (Springer Nature, 2022-02-07)
      A list of language functions is usually included in task-based speaking test specifications as a useful tool to describe the target output language of test-takers, to define TLU domains, and to specify task demands. Such lists are, however, often constructed intuitively; they also tend to focus solely on the types of function to be elicited, ignoring the ways in which each function is realised across different levels of proficiency (Green, 2012). The study reported in this chapter is part of a larger-scale test revision project for Trinity’s Integrated Skills in English (ISE) spoken examinations. Analysing audio-recordings of 32 performances on the ISE spoken examination both quantitatively and qualitatively, the study aims (a) to empirically validate the lists of language functions in the test specifications of these operational, large-scale, task-based examinations, (b) to explore the usefulness and potential of function analysis as a test task validation method, and (c) to contribute to a better understanding of the varied test-taker language that is used to generate language functions.
    • Video-conferencing speaking tests: do they measure the same construct as face-to-face tests?

      Nakatsuhara, Fumiyo; Inoue, Chihiro; Berry, Vivien; Galaczi, Evelina D.; University of Bedfordshire; British Council; Cambridge Assessment English (Routledge, 2021-08-23)
      This paper investigates the comparability between the video-conferencing and face-to-face modes of the IELTS Speaking Test in terms of scores and language functions generated by test-takers. Data were collected from 10 trained IELTS examiners and 99 test-takers who took two speaking tests under face-to-face and video-conferencing conditions. Many-facet Rasch model (MFRM) analysis of test scores indicated that the delivery mode did not make any meaningful difference to test-takers’ scores. An examination of language functions revealed that both modes elicited the same language functions, with the exception of asking for clarification: more test-takers made clarification requests in the video-conferencing mode (63.3%) than in the face-to-face mode (26.7%). Drawing on the findings and their practical implications, we extend emerging thinking about video-conferencing speaking assessment and the associated features of this modality in its own right.