    • Academic speaking: does the construct exist, and if so, how do we test it?

      Inoue, Chihiro; Nakatsuhara, Fumiyo; Lam, Daniel M. K.; Taylor, Lynda; University of Bedfordshire (2018-03-14)
    • An application of AUA to examining the potential washback of a new test of English for university entrance

      Nakamura, Keita; Green, Anthony; Eiken Foundation of Japan; University of Bedfordshire (2013-11-17)
    • Are current academic reading tests fit for purpose?

      Weir, Cyril J.; Chan, Sathena Hiu Chong; University of Bedfordshire (2018-03-14)
    • Aspects of fluency across assessed levels of speaking proficiency

      Tavakoli, Parvaneh; Nakatsuhara, Fumiyo; Hunter, Ann-Marie (Wiley, 2020-01-25)
      Recent research in second language acquisition suggests that a number of speed, breakdown, repair and composite measures reliably assess fluency and predict proficiency. However, there is little research evidence to indicate which measures best characterize fluency at each assessed level of proficiency, and which can consistently distinguish one level from the next. This study investigated fluency in the performances of 32 speakers on four tasks of the British Council’s Aptis Speaking test, which were awarded four different levels of proficiency (CEFR A2-C1). Using Praat, the performances were analysed for various aspects of utterance fluency across different levels of proficiency. The results suggest that speed and composite measures consistently distinguish fluency from the lowest to upper-intermediate levels (A2-B2), and many breakdown measures differentiate between the lowest level (A2) and the rest of the proficiency groups, with a few differentiating between lower (A2, B1) and higher levels (B2, C1). The varied use of repair measures at different levels suggests that a more complex process is at play. The findings imply that a detailed micro-analysis of fluency offers a more reliable understanding of the construct and its relationship with the assessment of proficiency.
    • Assessing English on the global stage : the British Council and English language testing, 1941-2016

      Weir, Cyril J.; O'Sullivan, Barry (Equinox, 2017-07-06)
      This book tells the story of the British Council’s seventy-five-year involvement in the field of English language testing. The first section of the book explores the role of the British Council in spreading British influence around the world through the export of British English language examinations and British expertise in language testing. Founded in 1934, the organisation formally entered the world of English language testing with the signing of an agreement with the University of Cambridge Local Examinations Syndicate (UCLES) in 1941. This agreement, which was to last until 1993, saw the British Council provide substantial English as a Foreign Language (EFL) expertise and technical and financial assistance to help UCLES develop their suite of English language tests. Perhaps the high points of this phase were the British Council-inspired Cambridge Diploma of English Studies, introduced in the 1940s, and the central role played by the British Council in the conceptualisation and development of the highly innovative English Language Testing Service (ELTS) in the 1970s, the precursor to the present-day International English Language Testing System (IELTS). British Council support for the development of indigenous national English language tests around the world over the last thirty years further enhanced the promotion of English and the creation of soft power for Britain. In the early 1990s the focus of the British Council changed from test development to the delivery of British examinations through its global network. However, by the early years of the 21st century, the organisation was actively considering a return to test development, a strategy that was realised with the founding of the Assessment Research Group in early 2012. This was followed later that year by the introduction of the Aptis English language testing service, the first major test developed in-house for over thirty years. As well as setting the stage for the re-emergence of professional expertise in language testing within the organisation, these initiatives have resulted in a growing strategic influence for the organisation on assessment in English language education. This influence derives from a commitment to test localisation, the development and provision of flexible, accessible and affordable tests, and an efficient delivery, marking and reporting system underpinned by an innovative socio-cognitive approach to language testing. This final period can be seen as a clear return by the British Council to using language testing as a tool for enhancing soft power for Britain: a return to the original raison d’être of the organisation.
    • Assessment for learning in language education

      Green, Anthony (Urmia University, 2018-10-01)
      This paper describes the growing interest in assessment for learning (AfL) approaches in language education. It explains the term, traces the origins of AfL in developments in general education and considers the evidence for its claimed impact on learning outcomes. The paper sets out some of the challenges involved in researching, implementing and evaluating AfL initiatives in the context of language teaching and learning and considers how this may impact on our field in the future.
    • Assessment literacy in practice

      Green, Anthony; University of Bedfordshire (2013-11-01)
    • Assessment of candidates' interactional competence using group oral tests

      Nakatsuhara, Fumiyo; University of Bedfordshire (2013-05-19)
    • Assessment of learning and assessment for learning

      Green, Anthony (TESOL International Association and Wiley, 2018-01-01)
    • Automated approaches to establishing context validity in reading tests

      Taylor, Lynda; Weir, Cyril J.; University of Bedfordshire (2012-06-03)
    • CEFR and ACTFL crosswalk: a text-based approach

      Green, Anthony (Stauffenburg, 2012-01-01)
    • Cognitive validity in language testing: theory and practice

      Field, John; University of Bedfordshire (2012-07-05)
    • Cognitive validity in the testing of speaking

      Field, John; University of Bedfordshire (2013-11-17)
    • A comparison of holistic, analytic, and part marking models in speaking assessment

      Khabbazbashi, Nahal; Galaczi, Evelina D. (SAGE, 2020-01-24)
      This mixed methods study examined holistic, analytic, and part marking models (MMs) in terms of their measurement properties and impact on candidate CEFR classifications in a semi-direct online speaking test. Speaking performances of 240 candidates were first marked holistically and by part (phase 1). On the basis of phase 1 findings – which suggested stronger measurement properties for the part MM – phase 2 focused on a comparison of part and analytic MMs. Speaking performances of 400 candidates were rated analytically and by part during that phase. Raters provided open comments on their marking experiences. Results suggested a significant impact of MM; approximately 30% and 50% of candidates in phases 1 and 2 respectively were awarded different (adjacent) CEFR levels depending on the choice of MM used to assign scores. There was a trend of higher CEFR levels with the holistic MM and lower CEFR levels with the part MM. While strong correlations were found between all pairings of MMs, further analyses revealed important differences. The part MM was shown to display superior measurement qualities, particularly in allowing raters to make finer distinctions between different speaking ability levels. These findings have implications for the scoring validity of speaking tests.
    • Computer-delivered listening tests: a sad necessity or an opportunity?

      Field, John; University of Bedfordshire (2017-07-06)
    • CRELLA: its socio-cognitive approach to validating tests

      Nakatsuhara, Fumiyo; University of Bedfordshire (2013-05-19)
    • Defining integrated reading-into-writing constructs: evidence at the B2-C1 interface

      Chan, Sathena Hiu Chong (Cambridge University Press, 2018-06-01)