    • The cognitive processing of candidates during reading tests: evidence from eye-tracking

      Bax, Stephen; University of Bedfordshire (SAGE, 2013-10)
      The research described in this article investigates test takers’ cognitive processing while completing onscreen IELTS (International English Language Testing System) reading test items. The research aims, among other things, to contribute to our ability to evaluate the cognitive validity of reading test items (Glaser, 1991; Field, in press). The project focused on differences in the reading behaviours of successful and unsuccessful candidates while completing IELTS test items. A group of Malaysian undergraduates (n = 71) took an onscreen test consisting of two IELTS reading passages with 11 test items. The eye movements of a random sample of these participants (n = 38) were tracked. Stimulated recall interview data were collected to assist in the interpretation of the eye-tracking data. Findings demonstrated significant differences between successful and unsuccessful test takers on a number of dimensions, including their ability to read expeditiously (Khalifa & Weir, 2009) and their focus on particular aspects of the test items and texts, while no observable differences were noted for other items. This offers new insights into the cognitive processes of candidates during reading tests. The findings will be of value to examination boards preparing reading tests, to teachers and learners, and to researchers interested in the cognitive processes of readers.
    • Communicating the theory, practice and principles of language testing to test stakeholders: some reflections

      Taylor, Lynda; University of Bedfordshire (SAGE, 2013-07)
      The 33rd Language Testing Research Colloquium (LTRC), held in June 2011 in Ann Arbor, Michigan, included a conference symposium on the topic of assessment literacy. This event brought together a group of four presenters from different parts of the world, each of whom reported on their recent research in this area. The presentations were followed by a discussant slot that highlighted thematic threads running across the papers and raised various questions for the professional language testing community to consider together. One point on which there was general consensus during the discussion was the need for more research to be undertaken and published in this complex and challenging area. It is particularly encouraging, therefore, to see a coherent set of studies on assessment literacy brought together in this special issue of Language Testing, which will undoubtedly make an important contribution to the steadily growing body of literature on this topic, particularly as it concerns the testing of languages. This brief commentary revisits some of the themes originally raised during the LTRC 2011 symposium, considers how these have been explored or developed through the papers in this special issue, and reflects on some future directions for our thinking and activity in this important area.
    • A multifaceted approach to investigating pre-task planning effects on paired oral test performance

      Nitta, Ryo; Nakatsuhara, Fumiyo; Nagoya Gakuin University; University of Bedfordshire (SAGE, 2014-01)
      Despite the growing popularity of paired-format speaking assessments, the effects of pre-task planning time on performance in these formats are not yet well understood: some studies have revealed benefits of planning, while others have not. Using a multifaceted approach that includes analysis of the process of speaking performance, this paper investigates the effect of pre-task planning in a paired format. Data were collected from 32 students who carried out two decision-making tasks in pairs, under planned and unplanned conditions. The study used analyses of rating scores, discourse analytic measures, and conversation analysis (CA) of test-taker discourse to gain insight into co-construction processes. A post-test questionnaire was also administered to understand the participants’ perceptions of planned and unplanned interactions. The results from the rating scores and discourse analytic measures revealed that planning had a limited effect on performance, and analysis of the questionnaires did not indicate clear differences between the two conditions. CA, however, identified the possibility of contrasting modes of discourse under the two planning conditions, raising concerns that planning might actually deprive test takers of the chance to demonstrate their ability to interact collaboratively.