Now showing items 1-20 of 182

    • Assessing second language pronunciation: a reference guide

      Jones, Johnathan; Isaacs, Talia (Springer, 2022-01-14)
      Pronunciation assessment (PA) is a resurgent subfield within applied linguistics that traverses the domains of psycholinguistics, second language acquisition (SLA), speech sciences, sociolinguistics, and more recently, computational linguistics. Though the terms ‘pronunciation’ and ‘assessment’ are sometimes defined in different ways by different authors, here we regard pronunciation as the vocal articulation of consonants and vowels (segmentals) combined with aspects of oral speech that extend beyond individual sounds, including stress, rhythm and intonation (suprasegmentals).
    • Towards more valid scoring criteria for integrated reading-writing and listening-writing summary tasks

      Chan, Sathena Hiu Chong; May, Lyn (SAGE, 2022-11-05)
      Despite the increased use of integrated tasks in high-stakes academic writing assessment, research on rating criteria which reflect the unique construct of integrated summary writing skills is comparatively rare. Using a mixed-method approach of expert judgement, text analysis and statistical analysis, the current study examines writing features that discriminate summaries produced by 150 candidates at five levels of proficiency on integrated reading-writing (R-W) and listening-writing (L-W) tasks. The expert judgement revealed a wide range of features which discriminated R-W and L-W responses. When responses at five proficiency levels were coded for these features, significant differences were obtained in seven features, including relevance of ideas, paraphrasing skills, accuracy of source information, academic style, language control, coherence and cohesion and task fulfilment across proficiency levels on the R-W task. The same features did not yield significant differences in L-W responses across proficiency levels. The findings have important implications for clarifying the construct of integrated summary writing in different modalities, indicating the possibility of expanding integrated rating categories with some potential for translating the identified criteria into automated rating systems. The results on the L-W task indicate the need for developing descriptors which can more effectively discriminate L-W responses.
    • Assessing interactional competence: exploring ratability challenges

      Lam, Daniel M. K.; Galaczi, Evelina D.; Nakatsuhara, Fumiyo; May, Lyn; University of Glasgow; Cambridge University Press and Assessment; University of Bedfordshire; Queensland University of Technology (John Benjamins, 2022-10-25)
      This paper is positioned at the interface of second/foreign language (L2) assessment and Conversation Analysis-Second Language Acquisition (CA-SLA). It explores challenges of ratability in assessing interactional competence (IC) from three dimensions: an overview of the conceptual and terminological convergence/divergence in the CA-SLA and L2 assessment literature, a micro-analytic Conversation Analysis of test-taker interactions, and the operationalisation of IC construct features in rating scales across assessment contexts. It draws insights from these dimensions into a discussion of the nature of the IC construct and the challenges of IC ratability, and concludes with suggestions on ways in which insights from CA research can contribute to addressing these issues.
    • Report to the Nursing and Midwifery Council on language testing policy

      Green, Anthony; Chan, Sathena Hiu Chong; University of Bedfordshire (Nursing and Midwifery Council, 2022-09-28)
      Responding to the NMC’s review of its language testing policy, our project involved:
      • a review of the extent to which the approach to language testing currently adopted by the NMC is proportionate and appropriate, and
      • recommendations for a methodology to investigate whether language tests of interest should be accepted by the NMC.
    • Integrated writing and its correlates: a meta-analysis

      Chan, Sathena Hiu Chong; Yamashita, J. (Elsevier, 2022-07-26)
      Integrated tasks are increasing in popularity, either replacing or complementing writing-only independent tasks in writing assessments. This shift has generated considerable research interest in the underlying construct and features of integrated writing (IW) performances. However, due to the complexity of the IW construct, there are conflicting findings about whether, and the extent to which, various language skills and IW text features correlate with IW scores. To understand the construct of IW, we conducted a meta-analysis to synthesize correlation coefficients between scores of IW performances and (1) other language skills and (2) text quality features of IW. We also examined factors that may moderate the correlation of IW scores with these two groups of correlates. The results showed that (1) reading and writing skills correlated more strongly with IW scores than listening did; and (2) text length showed the strongest correlation, followed by source integration, organization and syntactic complexity, with lexical complexity showing the smallest correlation. Several IW task features affected the magnitude of the correlations. The results supported the view that IW is an independent, albeit related, construct from other language skills, and that IW task features may affect the construct of IW.
    • Book review: Assessing speaking in context: expanding the construct and its applications

      Taylor, Lynda (SAGE, 2022-02-16)
      Review of Salaberry, M. R. & Burch, A. R. (2021) Assessing speaking in context: expanding the construct and its applications. Bristol: Multilingual Matters. ISBN 9781788923804
    • Validation of a large-scale task-based test: functional progression in dialogic speaking performance

      Inoue, Chihiro; Nakatsuhara, Fumiyo (Springer Nature, 2022-02-07)
      A list of language functions is usually included in task-based speaking test specifications as a useful tool to describe the target output language of test-takers, to define target language use (TLU) domains, and to specify task demands. Such lists are, however, often constructed intuitively, and they tend to focus solely on the types of function to be elicited, ignoring the ways in which each function is realised across different levels of proficiency (Green, 2012). The study reported in this chapter is a part of a larger-scale test revision project for Trinity’s Integrated Skills in English (ISE) spoken examinations. Analysing audio-recordings of 32 performances on the ISE spoken examination both quantitatively and qualitatively, the aims of this study are (a) to empirically validate lists of language functions in the test specifications of the operational, large-scale, task-based examinations, (b) to explore the usefulness and potential of function analysis as a test task validation method, and (c) to contribute to a better understanding of the varied test-taker language that is used to generate language functions.
    • Eye-tracking L2 students taking online multiple-choice reading tests: benefits and challenges

      Latimer, Nicola; Chan, Sathena Hiu Chong (Cranmore Publishing, 2022-04-10)
      Recently, there has been a marked increase in language testing research involving eye-tracking. It appears to offer a useful methodology for examining cognitive validity in language tests, i.e., the extent to which the mental processes that a language test elicits from test takers resemble those that they would employ in the target language use domains. This article reports on a recent study which examined reading processes of test takers at different proficiency levels on a reading proficiency test. Using a mixed-methods approach, the study collected cognitive validity evidence through eye-tracking and stimulated recall interviews. The study investigated whether there are differences in reading behaviour among test takers at CEFR B1, B2 and C1 levels on an online reading task. The main findings are reported and the implications of the findings are discussed to reflect on some fundamental questions regarding the use of eye-tracking in language testing research.
    • Assessing speaking

      Nakatsuhara, Fumiyo; Khabbazbashi, Nahal; Inoue, Chihiro; University of Bedfordshire (Routledge, 2021-12-16)
      In this chapter on assessing speaking, the history of speaking assessment is briefly traced in terms of the various ways in which speaking constructs have been defined and diversified over the past century. This is followed by a discussion of elicitation tasks, test delivery modes, rating methods, and scales that offered opportunities and/or presented challenges in operationalising different constructs of speaking and providing feedback. Several methods utilised in researching speaking assessment are then considered. Informed by recent research and advances in technology, the chapter provides recommendations for practice in both high-stakes and low-stakes contexts.
    • Towards the new construct of academic English in the digital age

      Khabbazbashi, Nahal; Chan, Sathena Hiu Chong; Clark, Tony; University of Bedfordshire; Cambridge University Press and Assessment (Oxford University Press, 2022-03-28)
      The increasing use of digital educational technologies in Higher Education (HE) means that the nature of communication may be shifting. Assessments of English for Academic Purposes (EAP) need to be reconceptualised accordingly, to reflect the new and complex ways in which language is used in HE. With a view to informing EAP assessments, our study set out to identify key trends related to Academic English using a scoping review of the literature. Findings revealed two major trends: (a) a shift towards multimodal communication, which has in turn resulted in the emergence of new types of academic assignments, multimodal genres, and the need for students to acquire new skills to operate within this multimodal arena; and (b) the limitations of existing skills-based approaches to assessment and the need to move towards integrated skills assessment. We discuss the implications of these findings for EAP assessments.
    • The design and validation of an online speaking test for young learners in Uruguay: challenges and innovations

      Khabbazbashi, Nahal; Nakatsuhara, Fumiyo; Inoue, Chihiro; Kaplan, Gabriela; Green, Anthony; University of Bedfordshire; Plan Ceibal (Cranmore Publishing on behalf of the International TESOL Union, 2022-02-10)
      This research presents the development of an online speaking test of English for students at the end of primary and beginning of secondary school education in state schools in Uruguay. Following the success of the Plan Ceibal one computer-tablet per child initiative, there was a drive to further utilize technology to improve the language ability of students, particularly in speaking, where the majority of students are at CEFR levels pre-A1 and A1. The national concern over a lack of spoken communicative skills amongst students led to a decision to develop a new speaking test, specifically tailored to local needs. This paper provides an overview of the speaking test development and validation project designed with the following objectives in mind: to establish, track, and report annually learners’ achievements against the Common European Framework of Reference for Languages (CEFR) targeting CEFR levels pre-A1 to A2, to inform teaching and learning, and to promote speaking practice in classrooms. Results of a three-phase mixed-methods study involving small-scale and large-scale trials with learners and examiners as well as a CEFR-linking exercise with expert panelists will be reported. Different sources of evidence will be brought together to build a validity argument for the test. The paper will also focus on some of the challenges involved in assessing young learners and discuss how design decisions, local knowledge and expertise, and technological innovations can be used to address such challenges with implications for other similar test development projects.
    • On topic validity in speaking tests

      Khabbazbashi, Nahal; University of Bedfordshire (Cambridge University Press, 2021-11-22)
      Topics are often used as a key speech elicitation method in performance-based assessments of spoken language, and yet the validity and fairness issues surrounding topics are surprisingly under-researched. Are different topics ‘equivalent’ or ‘parallel’? Can some topics bias against or favour individuals or groups of individuals? Does background knowledge of topics have an impact on performance? Might the content of test taker speech affect their scores – and perhaps more importantly, should it? Grounded in the real-world assessment context of IELTS, this volume draws on original data as well as insights from empirical and theoretical research to address these questions against the backdrop of one of the world’s most high-stakes language tests. This volume provides:
      • an up-to-date review of theoretical and empirical literature related to topic and background knowledge effects on second language performance
      • an accessible and systematic description of a mixed methods research study with explanations of design, analysis, and interpretation considerations at every stage
      • a comprehensive and coherent approach for building a validity argument in a given assessment context.
      The volume also contributes to critiques of recent models of communicative competence that over-rely on linguistic features at the expense of more complex aspects of communication, arguing for an expansion of current definitions of the speaking construct to emphasise the role of content of speech as an important – yet often neglected – feature.
    • The effects of extended planning time on candidates’ performance, processes and strategy use in the lecture listening-into-speaking tasks of the TOEFL iBT Test

      Inoue, Chihiro; Lam, Daniel M. K.; Educational Testing Service (Wiley, 2021-06-21)
      This study investigated the effects of two different planning time conditions (i.e., operational [20 s] and extended length [90 s]) for the lecture listening-into-speaking tasks of the TOEFL iBT® test for candidates at different proficiency levels. Seventy international students based in universities and language schools in the United Kingdom (35 at a lower level; 35 at a higher level) participated in the study. The effects of different lengths of planning time were examined in terms of (a) the scores given by ETS-certified raters; (b) the quality of the speaking performances characterized by accurately reproduced idea units and the measures of complexity, accuracy, and fluency; and (c) self-reported use of cognitive and metacognitive processes and strategies during listening, planning, and speaking. The results found neither a statistically significant main effect of the length of planning time nor an interaction between planning time and proficiency on the scores or on the quality of the speaking performance. There were several cognitive and metacognitive processes and strategies where significantly more engagement was reported under the extended planning time, which suggests enhanced cognitive validity of the task. However, the increased engagement in planning did not lead to any measurable improvement in the score. Therefore, in the interest of practicality, the results of this study provide justifications for the operational length of planning time for the lecture listening-into-speaking tasks in the speaking section of the TOEFL iBT test.
    • Exploring the potential for assessing interactional and pragmatic competence in semi-direct speaking tests

      Nakatsuhara, Fumiyo; May, Lyn; Inoue, Chihiro; Willcox-Ficzere, Edit; Westbrook, Carolyn; Spiby, Richard; University of Bedfordshire; Queensland University of Technology; Oxford Brookes University; British Council (British Council, 2021-11-11)
      To explore the potential of a semi-direct speaking test to assess a wider range of communicative language ability, the researchers developed four semi-direct speaking tasks – two designed to elicit features of interactional competence (IC) and two designed to elicit features of pragmatic competence (PC). The four tasks, as well as one benchmarking task, were piloted with 48 test-takers in China and Austria whose proficiency ranged from CEFR B1 to C. A post-test feedback survey was administered to all test-takers, after which selected test-takers were interviewed. A total of 184 task performances were analysed to identify interactional moves utilised by test-takers across three proficiency groups (i.e., B1, B2 and C). Data indicated that test-takers at higher levels employed a wider variety of interactional moves. They made use of concurring concessions and counter views when seeking to persuade a (hypothetical) conversational partner to change opinions in the IC tasks, and they projected upcoming requests and made face-related statements in the PC tasks, seemingly to pre-empt a conversational partner’s negative response to the request. The test-takers perceived the tasks to be highly authentic and found the video input useful in understanding the target audience of simulated interactions.
    • Use of innovative technology in oral language assessment

      Nakatsuhara, Fumiyo; Berry, Vivien; University of Bedfordshire; British Council (Taylor & Francis, 2021-12-27)
      Editorial
    • Video-conferencing speaking tests: do they measure the same construct as face-to-face tests?

      Nakatsuhara, Fumiyo; Inoue, Chihiro; Berry, Vivien; Galaczi, Evelina D.; University of Bedfordshire; British Council; Cambridge Assessment English (Routledge, 2021-08-23)
      This paper investigates the comparability between the video-conferencing and face-to-face modes of the IELTS Speaking Test in terms of scores and language functions generated by test-takers. Data were collected from 10 trained IELTS examiners and 99 test-takers who took two speaking tests under face-to-face and video-conferencing conditions. Many-facet Rasch Model (MFRM) analysis of test scores indicated that the delivery mode did not make any meaningful difference to test-takers’ scores. An examination of language functions revealed that both modes equally elicited the same language functions except asking for clarification. More test-takers made clarification requests in the video-conferencing mode (63.3%) than in the face-to-face mode (26.7%). Drawing on the findings, as well as practical implications, we extend emerging thinking about video-conferencing speaking assessment and the associated features of this modality in its own right.
    • Preparing for admissions tests in English

      Yu, Guoxing; Green, Anthony; University of Bristol; University of Bedfordshire (Taylor & Francis, 2021-05-06)
      Test preparation for admissions to education programmes has always been a contentious issue (Anastasi, 1981; Crocker, 2003; Messick, 1982; Powers, 2012). For Crocker (2006), ‘No activity in educational assessment raises more instructional, ethical, and validity issues than preparation for large-scale, high-stakes tests.’ (p. 115). Debate has often centred around the effectiveness of preparation and how it affects the validity of test score interpretations; equity and fairness of access to opportunity; and impacts on learning and teaching (Yu et al., 2017). A focus has often been preparation for tests originally designed for domestic students, for example, SATs (e.g., Alderman & Powers, 1980; Appelrouth et al., 2017; Montgomery & Lilly, 2012; Powers, 1993; Powers & Rock, 1999; Sesnowitz et al., 1982) and state-wide tests (e.g., Firestone et al., 2004; Jäger et al., 2012), but the increasing internationalisation of higher education has added a new dimension. To enrol in higher education programmes which use English as the medium of instruction, increasing numbers of international students whose first language is not English are now taking English language tests, or academic specialist tests administered in English, or both. The papers in this special issue concern how students prepare for these tests and the roles in this process of the tests themselves and of the organisations that provide them.
    • Towards new avenues for the IELTS Speaking Test: insights from examiners’ voices

      Inoue, Chihiro; Khabbazbashi, Nahal; Lam, Daniel M. K.; Nakatsuhara, Fumiyo (IELTS Partners, 2021-02-19)
      This study investigated the examiners’ views on all aspects of the IELTS Speaking Test, namely, the test tasks, topics, format, interlocutor frame, examiner guidelines, test administration, rating, training and standardisation, and test use. The overall trends of the examiners’ views of these aspects of the test were captured by a large-scale online questionnaire, to which a total of 1203 examiners responded. Based on the questionnaire responses, 36 examiners were carefully selected for subsequent interviews to explore the reasons behind their views in depth. The 36 examiners were representative of a number of differing geographical regions and a range of views and experiences in examining and giving examiner training. While the questionnaire responses exhibited generally positive views from examiners on the current IELTS Speaking Test, the interview responses uncovered various issues that the examiners experienced and suggested potentially beneficial modifications. Many of the issues (e.g. potentially unsuitable topics, rigidity of interlocutor frames) were attributable to the huge candidature of the IELTS Speaking Test, which has vastly expanded since the test’s last revision in 2001, perhaps beyond the initial expectations of the IELTS Partners. This study synthesized the voices from examiners and insights from relevant literature, and incorporated guidelines checks we submitted to the IELTS Partners. This report concludes with a number of suggestions for potential changes in the current IELTS Speaking Test, so as to enhance its validity and accessibility in today’s ever globalising world.
    • Exploring language assessment and testing: language in action

      Green, Anthony (Routledge, 2020-12-30)
      Exploring Language Assessment and Testing offers a straightforward and accessible introduction that starts from real-world experiences and uses practical examples to introduce the reader to the academic field of language assessment and testing. Extensively updated, with additional features such as reader tasks (with extensive commentaries from the author), a glossary of key terms and an annotated further reading section, this second edition provides coverage of recent theoretical and technological developments and explores specific purposes for assessment. Including concrete models and examples to guide readers into the relevant literature, this book also offers practical guidance for educators and researchers on designing, developing and using assessments. Providing an inclusive and impartial survey of both classroom-based assessment by teachers and larger-scale testing, this is an indispensable introduction for postgraduate and advanced undergraduate students studying Language Education, Applied Linguistics and Language Assessment.
    • Don't turn a deaf ear: a case for assessing interactive listening

      Lam, Daniel M. K.; University of Bedfordshire (Oxford University Press, 2021-01-11)
      The reciprocal nature of spoken interaction means that participants constantly alternate between speaker and listener roles. However, listener or recipient actions – also known as interactive listening (IL) – are somewhat underrepresented in language tests. In conventional listening tests, they are not directly assessed. In speaking tests, they have often been overshadowed by an emphasis on production features or subsumed under broader constructs such as interactional competence. This paper is an effort to represent the rich IL phenomena that can be found in peer interactive speaking assessments, where the candidate-candidate format and discussion task offer opportunities to elicit and assess IL. Taking a close look at candidate discourse and non-verbal actions through a conversation analytic approach, the analysis focuses on three IL features: 1) listenership displays, 2) contingent responses, and 3) collaborative completions, and unpacks their relative strength in evidencing listener understanding. This paper concludes by making a case for revisiting the role of interactive listening, calling for more explicit inclusion of IL in L2 assessment as well as pedagogy.