
    • Writing: the re-construction of language

      Davidson, Andrew (Elsevier, 2018-09-13)
      This paper takes as its point of departure David Olson’s contention (as expressed in The Mind on Paper, CUP, Cambridge, 2016) that writing affords a meta-representation of language by allowing linguistic elements to become explicit objects of awareness. In so doing, a tradition of suspicion of writing (e.g. Rousseau and Saussure) that sees it as a detour from and contamination of language is disarmed: writing becomes innocent, becomes naturalised. Also disarmed are some of the concerns raised by the observation made in the title of Per Linell’s book of a ‘written language bias in linguistics’ (Routledge, London, 2005), with its attendant criticisms of approaches (e.g. Chomsky’s) that assume written language to be transparent to the putative underlying natural object. Taking Chomsky’s position (an unaware scriptism) as a representative point of orientation and target of critique, the paper assembles evidence that problematises the first-order, natural reality of cardinal linguistic constructs: phonemes, words and sentences. It is argued that the facticity of these constructs is artefactual, and that this facticity is achieved by way of the introjection of ideal objects which the mind constructs as denotations of elements of an alphabetic writing system: the mental representation of language is transformed by engagement with writing, and it is this non-natural artefact to which Structuralist/Generativist linguistics has been answering. Evidence for this position from the psycholinguistic and neurolinguistic literature is presented and discussed. The conclusion arrived at is that the cultural practice of literacy re-configures the cognitive realisation of language. Olson takes writing to be a map of the territory; however, it is suggested that the literate mind re-constructs the territory to answer to the features of the map.
    • Reflecting on the past, embracing the future

      Hamp-Lyons, Liz; University of Bedfordshire (Elsevier, 2019-10-14)
      In the Call for Papers for this anniversary volume of Assessing Writing, the Editors described the goal as “to trace the evolution of ideas, questions, and concerns that are key to our field, to explain their relevance in the present, and to look forward by exploring how these might be addressed in the future” and they asked me to contribute my thoughts. As the Editor of Assessing Writing between 2002 and 2017—a fifteen-year period—I realised from the outset that this was a very ambitious goal, one that no single paper could accomplish. Nevertheless, it seemed to me an opportunity to reflect on my own experiences as Editor, and through some of those experiences, offer a small insight into what this journal has done (and not done) to contribute to the debate about the “ideas, questions and concerns”; but also, to suggest some areas that would benefit from more questioning and thinking in the future. Despite the challenges of the task, I am very grateful to current Editors Martin East and David Slomp for the opportunity to reflect on these 25 years and to view them, in part, through the lens provided by the five articles appearing in this anniversary volume.
    • Research and practice in assessing academic English: the case of IELTS

      Taylor, Lynda; Saville, N. (Cambridge University Press, 2019-12-01)
      Test developers need to demonstrate they have premised their measurement tools on a sound theoretical framework which guides their coverage of appropriate language ability constructs in the tests they offer to the public. This is essential for supporting claims about the validity and usefulness of the scores generated by the test. This volume describes differing approaches to understanding academic reading ability that have emerged in recent decades and goes on to develop an empirically grounded framework for validating tests of academic reading ability. The framework is then applied to the IELTS Academic reading module to investigate a number of different validity perspectives that reflect the socio-cognitive nature of any assessment event. The authors demonstrate how a systematic understanding and application of the framework and its components can help test developers to operationalise their tests so as to fulfil the validity requirements for an academic reading test. The book provides:
        • an up-to-date review of the relevant literature on assessing academic reading
        • a clear and detailed specification of the construct of academic reading
        • an evaluation of what constitutes an adequate representation of the construct of academic reading for assessment purposes
        • a consideration of the nature of academic reading in a digital age and its implications for assessment research and test development
      The volume is a rich source of information on all aspects of testing academic reading ability. Examination boards and other institutions who need to validate their own academic reading tests in a systematic and coherent manner, or who wish to develop new instruments for measuring academic reading, will find it of interest, as will researchers and graduate students in the field of language assessment, and those teachers preparing students for IELTS (and similar tests) or involved in English for Academic Purposes programmes.
    • A comparison of holistic, analytic, and part marking models in speaking assessment

      Khabbazbashi, Nahal; Galaczi, Evelina D. (SAGE, 2020-01-24)
      This mixed-methods study examined holistic, analytic, and part marking models (MMs) in terms of their measurement properties and their impact on candidate CEFR classifications in a semi-direct online speaking test. Speaking performances of 240 candidates were first marked holistically and by part (phase 1). On the basis of phase 1 findings – which suggested stronger measurement properties for the part MM – phase 2 focused on a comparison of part and analytic MMs. Speaking performances of 400 candidates were rated analytically and by part during that phase. Raters provided open comments on their marking experiences. Results suggested a significant impact of MM; approximately 30% and 50% of candidates in phases 1 and 2 respectively were awarded different (adjacent) CEFR levels depending on the MM used to assign scores. There was a trend of higher CEFR levels with the holistic MM and lower CEFR levels with the part MM. While strong correlations were found between all pairings of MMs, further analyses revealed important differences. The part MM was shown to display superior measurement qualities, particularly in allowing raters to make finer distinctions between different speaking ability levels. These findings have implications for the scoring validity of speaking tests.
    • Cognitive validity in language testing: theory and practice

      Field, John; University of Bedfordshire (2012-07-05)
    • Working for washback from university entrance tests in Japan

      Green, Anthony; University of Bedfordshire (2013-07-11)
    • Validating two types of EAP reading-into-writing test tasks

      Chan, Sathena Hiu Chong; University of Bedfordshire (2013-07-11)
    • Validating performance on writing test tasks

      Weir, Cyril J.; University of Bedfordshire (2013-07-11)
    • Computer delivered listening tests: a sad necessity or an opportunity?

      Field, John; University of Bedfordshire (2017-07-06)
    • Validating speaking test rating scales through microanalysis of fluency using PRAAT

      Tavakoli, Parvaneh; Nakatsuhara, Fumiyo; Hunter, Ann-Marie; University of Reading; University of Bedfordshire; St. Mary’s University (2017-07-06)
    • Interactional competence in the workplace: challenges and opportunities

      Galaczi, Evelina D.; Taylor, Lynda; Cambridge Assessment English; University of Bedfordshire (2018-11-25)
    • Scaling and scheming: the highs and lows of scoring writing

      Green, Anthony; University of Bedfordshire (2019-12-04)
    • Developing an advanced, specialized English proficiency test for Beijing universities

      Hamp-Lyons, Liz; Wenxia, Bonnie Zhang; University of Bedfordshire; Tsinghua University (2019-07-10)
    • Development of empirically driven checklists for learners’ interactional competence

      Nakatsuhara, Fumiyo; May, Lyn; Lam, Daniel M. K.; Galaczi, Evelina D.; University of Bedfordshire; Queensland University of Technology; Cambridge Assessment English (2019-03-27)