• Accommodation in language testing

      Taylor, Lynda; University of Bedfordshire (Cambridge University Press, 2012-01)
    • Adapting or developing source material for listening and reading tests

      Green, Anthony (Wiley Blackwell, 2013)
      The ability to understand spoken or written language cannot be observed directly but must be inferred. In tests of reading and listening, test takers are given input in the form of texts or recordings of spoken language and are asked to perform tasks as evidence of their comprehension. This chapter traces how the choice of texts or recordings for use in such tests has been shaped by trends in language education. The last century saw a decisive movement away from translation and reading aloud toward the use of comprehension questions as evidence of understanding. Considerations in selecting and preparing material are outlined. Methods that have been used by developers to gauge the difficulty of texts and recordings are described. The role of item writers in shaping or adapting material for use in tests is discussed, and predictions are made about future developments, including a growing role for technology in the selection of material.
    • Applying a cognitive processing model to Main Suite Reading papers

      Weir, Cyril J.; Khalifa, Hanan (University of Cambridge, 2008-02)
    • Are two heads better than one? Pair work in L2 assessment contexts

      Taylor, Lynda; Wigglesworth, Gillian (Sage Publications, 2009-07)
    • Assessing English: a trial collaborative standardised marking project

      Gibbons, Simon; Marshall, Bethan; King's College London (University of Waikato, 2010-12)
      Recent policy developments in England have, to some extent, relaxed the hold of external, high-stakes assessment on teachers of students in the early years of secondary education. In such a context, there is the opportunity for teachers to reassert the importance of teacher assessment as the most reliable means of judging a student’s abilities. A recent project jointly undertaken by the National Association for the Teaching of English (NATE) and the Centre for Evaluation and Monitoring (CEM) was one attempt to trial a model for the collaborative standardised assessment of students’ writing. This article puts this project in the context of previous assessment initiatives in English and suggests that, given recent policy developments, now may be precisely the time for the profession to seek to be proactive in setting the assessment agenda.
    • Assessing health professionals

      Taylor, Lynda; Pill, John; University of Bedfordshire; University of Melbourne (Wiley, 2013-11)
      Language tests are used in the evaluation of migrating health professionals’ readiness to practise safely and effectively. Such assessment is complex, involving policy and practice alongside questions of a moral and ethical nature. The chapter focuses on English language assessment of doctors—referred to as international medical graduates (IMGs)—to exemplify issues arising for all health professionals in any language. The initial section describes differing approaches to language assessment used in various jurisdictions internationally: the UK, Australia, and the USA. The next section links this assessment policy and practice to theoretical insights and research findings. It considers the scope of language proficiency and of what is testable in specific purpose language (LSP) tests, and describes the increased recognition in health-care contexts of the importance of effective communication for patient safety and positive clinical outcomes. Studies of the development of language tests for health professionals are cited to highlight the importance of collaboration between domain experts and test designers regarding test content, task format, and rating criteria. There is only limited evidence that LSP tests are better predictors than general purpose language tests of test takers’ ability to perform in a particular context; however, it is similarly uncertain whether general purpose tests are sufficient for such sensitive contexts as those in health care. The following section presents challenges and issues for LSP assessment for health professionals from three theoretical perspectives: authenticity, specificity, and inseparability; it also considers practical and policy constraints. The final section indicates further directions for research and wider ethical issues inherent in the global migration of health professionals.
    • Assessing students with disabilities: voices from the stakeholder community

      Taylor, Lynda; Khalifa, Hanan (Cambridge Scholars Press, 2013)
    • Book review: "Interaction in Paired Oral Proficiency Assessment in Spanish by A.M. Ducasse"

      Inoue, Chihiro (Association for Language Testing and Assessment of Australia and New Zealand, 2015)
    • Bricks or mortar: which parts of the input does a second language listener rely on?

      Field, John (TESOL, 2008-09)
      There is considerable evidence from psycholinguistics that first language listeners handle function words differently from content words. This makes intuitive sense because content words require the listener to access a lexical meaning representation whereas function words do not. A separate channel of processing for functors would enable them to be detected faster. The question is of importance to our understanding of second language (L2) listening. Because what is extracted from the input by L2 listeners is generally less than complete, it is useful for the instructor to know which parts of the signal they are likely to recognize, and which parts are likely to be lost to them. On the one hand, L2 listeners might rely heavily on function words because high frequency renders them familiar. On the other, they might have difficulty identifying function words confidently within a piece of connected speech because functors in English are usually brief and of low perceptual prominence. The current study investigated intake by intermediate-level L2 listeners to establish whether function or content words are processed more accurately and reported more frequently. It found that the recognition of functors fell significantly behind that of lexical words. The finding was remarkably robust across first languages and across levels of proficiency, suggesting that it may reflect the way in which L2 listeners choose to distribute their attention.
    • CALL: past, present and future

      Bax, Stephen (Taylor & Francis (Routledge), 2009-05)
    • Can-do statements in reference level descriptions and the socio-cognitive framework for test validation

      Nakatsuhara, Fumiyo; University of Bedfordshire (Japan Foundation, 2013)
    • A case of testing L2 English reading for class level placement

      Green, Anthony; University of Bedfordshire (Palgrave MacMillan, 2011-05)
    • The challenges of second-language writing assessment

      Hamp-Lyons, Liz (Bedford/St. Martins, 2008-04)
    • The co-construction of conversation in group oral tests

      Nakatsuhara, Fumiyo; University of Bedfordshire (Peter Lang, 2013)
    • The cognitive processes underlying the academic reading construct as measured by IELTS

      Weir, Cyril J.; Hawkey, Roger; Green, Anthony; Devi, Sarojani (Cambridge University Press, 2012)
      This study, building on CRELLA’s 2006/07 IELTS-funded research, further clarifies the links between what is measured by IELTS and the construct of academic reading as practised by students in a UK university. It elicits from IELTS candidates, by means of a retrospective protocol, the reading processes they engage in when tackling IELTS Reading tasks. The study provides grounded insight into the congruence between the construct measured by IELTS and that of academic reading in the target domain.
    • Cognitive processing and foreign language use

      Field, John (Routledge, 2014-12)
    • A cognitive processing approach towards defining reading comprehension

      Weir, Cyril J.; Khalifa, Hanan; University of Bedfordshire (University of Cambridge, 2008-02)
      In this article we focus on a cognitive processing approach as a theoretical basis for evaluating the cognitive validity of reading tests. This approach is concerned with the mental processes readers actually use in comprehending texts when engaging in different types of real-life reading. However, we start with a brief review of other approaches that have attempted to establish what reading comprehension really involves.
    • The cognitive processing of candidates during reading tests: evidence from eye-tracking

      Bax, Stephen; University of Bedfordshire (SAGE, 2013-10)
      The research described in this article investigates test takers’ cognitive processing while completing onscreen IELTS (International English Language Testing System) reading test items. The research aims, among other things, to contribute to our ability to evaluate the cognitive validity of reading test items (Glaser, 1991; Field, in press). The project focused on differences in the reading behaviours of successful and unsuccessful candidates while completing IELTS test items. A group of Malaysian undergraduates (n = 71) took an onscreen test consisting of two IELTS reading passages with 11 test items. Eye movements of a random sample of these participants (n = 38) were tracked. Stimulated recall interview data were collected to assist in the interpretation of the eye-tracking data. Findings demonstrated significant differences between successful and unsuccessful test takers on a number of dimensions, including their ability to read expeditiously (Khalifa & Weir, 2009) and their focus on particular aspects of the test items and texts, while no observable differences were noted on other items. This offers new insights into the cognitive processes of candidates during reading tests. Findings will be of value to examination boards preparing reading tests, to teachers and learners, and also to researchers interested in the cognitive processes of readers.