• Accommodation in language testing

      Taylor, Lynda; University of Bedfordshire (Cambridge University Press, 2012-01)
    • Assessing health professionals

      Taylor, Lynda; Pill, John; University of Bedfordshire; University of Melbourne (Wiley, 2013-11)
      Language tests are used in the evaluation of migrating health professionals’ readiness to practise safely and effectively. Such assessment is complex, involving policy and practice alongside questions of a moral and ethical nature. The chapter focuses on English language assessment of doctors—referred to as international medical graduates (IMGs)—to exemplify issues arising for all health professionals in any language. The initial section describes differing approaches to language assessment used in various jurisdictions internationally: the UK, Australia, and the USA. The next section links this assessment policy and practice to theoretical insights and research findings. It considers the scope of language proficiency and of what is testable in specific purpose language (LSP) tests, and describes the increased recognition in health-care contexts of the importance of effective communication for patient safety and positive clinical outcomes. Studies of the development of language tests for health professionals are cited to highlight the importance of collaboration between domain experts and test designers regarding test content, task format, and rating criteria. There is only limited evidence that LSP tests are better predictors than general purpose language tests of test takers’ ability to perform in a particular context; however, it is similarly uncertain whether general purpose tests are sufficient for such sensitive contexts as those in health care. The following section presents challenges and issues for LSP assessment for health professionals from three theoretical perspectives: authenticity, specificity, and inseparability; it also considers practical and policy constraints. The final section indicates further directions for research and wider ethical issues inherent in the global migration of health professionals.
    • Can-do statements in reference level descriptions and the socio-cognitive framework for test validation

      Nakatsuhara, Fumiyo; University of Bedfordshire (Japan Foundation, 2013)
    • A case of testing L2 English reading for class level placement

      Green, Anthony; University of Bedfordshire (Palgrave MacMillan, 2011-05)
    • The co-construction of conversation in group oral tests

      Nakatsuhara, Fumiyo; University of Bedfordshire (Peter Lang, 2013)
    • A cognitive processing approach towards defining reading comprehension

      Weir, Cyril J.; Khalifa, Hanan; University of Bedfordshire (University of Cambridge, 2008-02)
In this article we focus on a cognitive processing approach as a theoretical basis for evaluating the cognitive validity of reading tests. This approach is concerned with the mental processes readers actually use in comprehending texts when engaging in different types of real-life reading. However, we begin with a brief review of other approaches that have attempted to establish what reading comprehension really involves.
    • The cognitive processing of candidates during reading tests: evidence from eye-tracking

      Bax, Stephen; University of Bedfordshire (SAGE, 2013-10)
The research described in this article investigates test takers’ cognitive processing while completing onscreen IELTS (International English Language Testing System) reading test items. The research aims, among other things, to contribute to our ability to evaluate the cognitive validity of reading test items (Glaser, 1991; Field, in press). The project focused on differences in the reading behaviours of successful and unsuccessful candidates while completing IELTS test items. A group of Malaysian undergraduates (n = 71) took an onscreen test consisting of two IELTS reading passages with 11 test items. Eye movements of a random sample of these participants (n = 38) were tracked. Stimulated recall interview data were collected to assist in the interpretation of the eye-tracking data. Findings demonstrated significant differences between successful and unsuccessful test takers on a number of dimensions, including their ability to read expeditiously (Khalifa & Weir, 2009) and their focus on particular aspects of the test items and texts, while no observable difference was noted for other items. This offers new insights into the cognitive processes of candidates during reading tests. Findings will be of value to examination boards preparing reading tests, to teachers and learners, and also to researchers interested in the cognitive processes of readers.
    • Cognitive validity

      Field, John; University of Bedfordshire (Cambridge University Press, 2011)
    • The cognitive validity of the lecture-based question in the IELTS listening paper

      Field, John; University of Bedfordshire (Cambridge University Press, 2012)
    • Communicating the theory, practice and principles of language testing to test stakeholders: some reflections

      Taylor, Lynda; University of Bedfordshire (SAGE, 2013-07)
      The 33rd Language Testing Research Colloquium (LTRC), held in June 2011 in Ann Arbor, Michigan, included a conference symposium on the topic of assessment literacy. This event brought together a group of four presenters from different parts of the world, each of whom reported on their recent research in this area. Presentations were followed by a discussant slot that highlighted some thematic threads from across the papers and raised various questions for the professional language testing community to consider together. One point upon which there was general consensus during the discussion was the need for more research to be undertaken and published in this complex and challenging area. It is particularly encouraging, therefore, to see a coherent set of studies on assessment literacy brought together in this special issue of Language Testing and it will undoubtedly make an important contribution to the steadily growing body of literature on this topic, particularly as it concerns the testing of languages. This brief commentary revisits some of the themes originally raised during the LTRC 2011 symposium, considers how these have been explored or developed through the papers in this special issue and reflects on some future directions for our thinking and activity in this important area.
    • Compliments and refusals in Poland and England

      Bhatti, Joanna; Žegarac, Vladimir; University of Bedfordshire (De Gruyter, 2012-01-01)
There are significant cross-cultural differences in the way compliments and refusals are made and responded to. The investigation of these speech acts touches on some interesting issues for pragmatic theory: the relation between the universal and the culture-specific features of complimenting and refusing, the importance of culture-specific strategies in explaining how these speech acts are produced and responded to, as well as the relation between the message conveyed by a compliment or refusal and its affective/emotional effects on the hearer. The pilot study presented in this paper investigates the production and reception of compliments and refusals in the relatively proximate cultures of England and Poland. The findings reveal significant systematic cross-cultural differences relating to refusals, while the differences relating to compliments are fewer and more subtle. The data suggest that the cross-cultural similarities and differences observed can be explained in terms of (a) a universalist view of institutional speech acts and face concerns in rapport management, (b) the Relevance-theoretic view of communication and cognition as oriented towards maximising informativeness and (c) some culture-specific values. These tentative conclusions are based on very limited data and indicate useful directions for future research.
    • Conclusions and recommendations

      Weir, Cyril J.; University of Bedfordshire (Cambridge University Press, 2013)
    • Course handbook for promoting sustainable excellence in English language testing and assessment

      Green, Anthony; Westbrook, C.; Burenina, N.; University of Bedfordshire (Cambridge University Press, 2014)
    • Culture and communication

      Zegarac, Vladimir; University of Bedfordshire (Continuum International Publishing Group, 2008)
    • Developing assessment literacy

      Taylor, Lynda; University of Bedfordshire (Cambridge University Press, 2009-06)
Language testing and assessment have moved center stage in recent years, whether for educational, employment, or sociopolitical reasons. More and more people are involved in developing tests and using test score outcomes, though often without a background or training in assessment to equip them adequately for this role. Simultaneously, increasing professionalization of the field has led to the generation of standards, ethical codes, and guidelines for good testing practice. Although these can help make assessment practices more transparent and accessible to a wider constituency, they also risk promoting a view of language testing as highly technical and specialized, best left to experts. These trends have implications for both policy and practice. This article reviews efforts to promote understanding of assessment within the field of applied linguistics and within education and society more broadly. The role of professional associations, academic institutions, and commercial organizations in developing assessment literacy is considered, as well as the contribution of published material and other types of training resources. This article reflects on how the international language testing community can encourage the sharing of the core knowledge, skills, and understanding that underpin good quality assessment as widely and accessibly as possible for the benefit of all.
    • Developing assessment scales for large-scale speaking tests: a multiple-method approach

      Galaczi, Evelina D.; ffrench, Angela; Hubbard, Chris; Green, Anthony; University of Cambridge; University of Bedfordshire (Taylor and Francis, 2011-08)
      The process of constructing assessment scales for performance testing is complex and multi-dimensional. As a result, a number of different approaches, both empirically and intuitively based, are open to developers. In this paper we outline the approach taken in the revision of a set of assessment scales used with speaking tests, and present the value of combining methodologies to inform and refine scale development. We set the process in the context of the growing influence of the Common European Framework of Reference (Council of Europe 2001) and outline a number of stages in terms of the procedures followed and outcomes produced. The findings describe a range of data that was collected and analysed through a number of phases and used to inform the revision of the scales, including consultation with experts, and data-driven qualitative and quantitative research studies. The overall aim of the paper is to illustrate the importance of combining intuitive and data-driven scale construction methodologies, and to suggest a usable scale construction model for application or adaptation in a variety of contexts.
    • Digital education: beyond the 'wow' factor

      Bax, Stephen; University of Bedfordshire (Palgrave MacMillan, 2011-02)
    • Discourse and genre: using language in context

      Bax, Stephen; University of Bedfordshire (Palgrave MacMillan, 2011-01)
    • An empirical investigation of the process of writing Academic Reading test items for the International English Language Testing System

      Hawkey, Roger; Green, Anthony; University of Bedfordshire (Cambridge University Press, 2012)
The paper describes a study of reading test text selection, item writing, and editing processes, with particular reference to these areas of test production for the IELTS Academic Reading test. Based on retrospective reports and direct observation, the report compares how trained and untrained item writers select and edit reading texts to make them suitable for a task-based test of reading, and how they generate the accompanying items. Both individual and collective test editing processes are investigated.