    • Exploring the potential for assessing interactional and pragmatic competence in semi-direct speaking tests

      Nakatsuhara, Fumiyo; May, Lyn; Inoue, Chihiro; Willcox-Ficzere, Edit; Westbrook, Carolyn; Spiby, Richard (University of Bedfordshire; Queensland University of Technology; Oxford Brookes University; British Council) (British Council, 2021-11-11)
      To explore the potential of a semi-direct speaking test to assess a wider range of communicative language ability, the researchers developed four semi-direct speaking tasks – two designed to elicit features of interactional competence (IC) and two designed to elicit features of pragmatic competence (PC). The four tasks, as well as one benchmarking task, were piloted with 48 test-takers in China and Austria whose proficiency ranged from CEFR B1 to C. A post-test feedback survey was administered to all test-takers, after which selected test-takers were interviewed. A total of 184 task performances were analysed to identify the interactional moves utilised by test-takers across three proficiency groups (i.e., B1, B2 and C). The data indicated that test-takers at higher levels employed a wider variety of interactional moves: they made use of concurring concessions and counter-views when seeking to persuade a (hypothetical) conversational partner to change their opinion in the IC tasks, and they projected upcoming requests and made face-related statements in the PC tasks, seemingly to pre-empt a conversational partner’s negative response to the request. Test-takers perceived the tasks to be highly authentic and found the video input useful in understanding the target audience of the simulated interactions.
    • Opening the black box: exploring automated speaking evaluation

      Khabbazbashi, Nahal; Xu, Jing; Galaczi, Evelina D. (Springer, 2021-02-10)
      Rapid advances in speech processing and machine learning technologies have attracted language testers’ strong interest in developing automated speaking assessment, in which candidate responses are scored by computer algorithms rather than by trained human examiners. Despite its increasing popularity, automatic evaluation of spoken language is still shrouded in mystery and technical jargon, often resembling an opaque "black box" that transforms candidate speech into scores in a matter of minutes. Our chapter explicitly problematizes this lack of transparency around test score interpretation and use, and asks the following questions: What do automatically derived scores actually mean? What are the speaking constructs underlying them? What are some common problems encountered in the automated assessment of speaking? And how can test users evaluate the suitability of automated speaking assessment for their proposed test uses? In addressing these questions, our chapter explores the benefits, problems, and caveats associated with automated speaking assessment, touching on key theoretical discussions of construct representation and score interpretation as well as practical issues such as the infrastructure needed to capture high-quality audio and the difficulties of acquiring training data. We hope to promote assessment literacy by providing the guidance users need to critically engage with automated speaking assessment, pose the right questions to test developers, and ultimately make informed decisions about the fitness for purpose of automated assessment solutions for their specific learning and assessment contexts.
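      To make the "black box" metaphor concrete, the sketch below outlines the kind of pipeline such systems may conceal: a handful of hand-crafted fluency and lexical features extracted from a transcribed response, mapped onto human holistic scores by a simple regression. Every feature, data point, and modelling choice here is an invented illustration for discussion purposes, not the system examined in the chapter.

```python
# Minimal illustrative sketch of an automated speaking scorer (assumed, toy example):
# hand-crafted features from a transcribed response -> linear regression onto human scores.
import numpy as np

def fluency_features(transcript: str, speaking_time_s: float, pause_time_s: float) -> np.ndarray:
    """Compute three crude proxies often associated with spoken fluency and lexical range."""
    words = transcript.lower().split()
    speech_rate = len(words) / max(speaking_time_s, 1e-6)            # words per second of speech
    pause_ratio = pause_time_s / max(speaking_time_s + pause_time_s, 1e-6)
    type_token_ratio = len(set(words)) / max(len(words), 1)          # lexical variety proxy
    return np.array([speech_rate, pause_ratio, type_token_ratio])

# Toy training set: (transcript, speaking time in s, pause time in s) with human holistic scores (1-5).
responses = [
    ("well I think the city should build more parks because people need green space", 11.0, 1.0),
    ("um the city um should maybe build park because is good", 12.0, 5.0),
    ("in my opinion investing in public transport benefits both commuters and the environment", 10.0, 0.8),
    ("transport is um good for people yes", 9.0, 4.5),
]
human_scores = np.array([4.0, 2.0, 4.5, 2.5])

X = np.vstack([fluency_features(t, s, p) for t, s, p in responses])
X = np.hstack([X, np.ones((X.shape[0], 1))])                          # add an intercept column

# Fit weights by least squares: here the "black box" is nothing more than a linear model.
weights, *_ = np.linalg.lstsq(X, human_scores, rcond=None)

# Score an unseen (equally invented) response with the fitted weights.
new_features = fluency_features("more parks would encourage families to spend time outdoors", 8.0, 0.7)
predicted = float(np.append(new_features, 1.0) @ weights)
print(f"Predicted holistic score: {predicted:.2f}")
```

      Even this toy version surfaces the questions the chapter raises: the score is only as meaningful as the features chosen and the human ratings used for training, neither of which is visible to the test user.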
      The rapid advances in speech processing and machine learning technologies have attracted language testers’ strong interest in developing automated speaking assessment in which candidate responses are scored by computer algorithms rather than trained human examiners. Despite its increasing popularity, automatic evaluation of spoken language is still shrouded in mystery and technical jargon, often resembling an opaque "black box" that transforms candidate speech to scores in a matter of minutes. Our chapter explicitly problematizes this lack of transparency around test score interpretation and use and asks the following questions: What do automatically derived scores actually mean? What are the speaking constructs underlying them? What are some common problems encountered in automated assessment of speaking? And how can test users evaluate the suitability of automated speaking assessment for their proposed test uses? In addressing these questions, the purpose of our chapter is to explore the benefits, problems, and caveats associated with automated speaking assessment touching on key theoretical discussions on construct representation and score interpretation as well as practical issues such as the infrastructure necessary for capturing high quality audio and the difficulties associated with acquiring training data. We hope to promote assessment literacy by providing the necessary guidance for users to critically engage with automated speaking assessment, pose the right questions to test developers, and ultimately make informed decisions regarding the fitness for purpose of automated assessment solutions for their specific learning and assessment contexts.