• The origins and adaptations of English as a school subject

      Goodwyn, Andrew (Cambridge University Press, 2019-12-31)
      This chapter will consider the particular manifestation of English as a ‘school subject’, principally in the country called England, with some space given to significant international comparisons; it will focus mainly on the secondary school version. We will call this phenomenon School Subject English (SSE). The chapter will argue that historically SSE has gone through phases of development and adaptation, some aspects of these changes inspired by new theories and concepts and by societal change, others, especially more recently, entirely reactive to external impositions (for an analysis of the current position of SSE, see Roberts, this volume). This chapter considers SSE to have been ontologically ‘expanded’ between 1870 and (about) 1990, increasing the ambition and scope of the ‘subject’ and the emancipatory ideology of its teachers. This ontological expansion was principally a result of adding ‘models’ of SSE, models that each emphasise different epistemologies of what counts as significant knowledge, and that can only exist in a dynamic tension. In relation to this volume, SSE has always incorporated close attention to language, but only very briefly (1988–1992) has something akin to Applied Linguistics had any real influence in the secondary classroom. However, with varying emphasis historically, there has been attention (the Adult Needs/Skills model, see later) to the conventions of language, especially ‘secretarial’ issues of spelling and punctuation, some understanding of grammar, and a focus on notions of Standard English, in writing and in speech; but these have never been the driving ideology of SSE. Of the two conceptual giants ‘Language’ and ‘Literature’, it is the latter that has mattered most over those 120 years.
    • Opening the black box: exploring automated speaking evaluation

      Khabbazbashi, Nahal; Xu, Jing; Galaczi, Evelina D. (Springer, 2021-02-10)
      The rapid advances in speech processing and machine learning technologies have attracted language testers’ strong interest in developing automated speaking assessment, in which candidate responses are scored by computer algorithms rather than trained human examiners. Despite its increasing popularity, automatic evaluation of spoken language is still shrouded in mystery and technical jargon, often resembling an opaque ‘black box’ that transforms candidate speech into scores in a matter of minutes. Our chapter explicitly problematizes this lack of transparency around test score interpretation and use and asks the following questions: What do automatically derived scores actually mean? What are the speaking constructs underlying them? What are some common problems encountered in automated assessment of speaking? And how can test users evaluate the suitability of automated speaking assessment for their proposed test uses? In addressing these questions, the purpose of our chapter is to explore the benefits, problems, and caveats associated with automated speaking assessment, touching on key theoretical discussions on construct representation and score interpretation as well as practical issues such as the infrastructure necessary for capturing high-quality audio and the difficulties associated with acquiring training data. We hope to promote assessment literacy by providing the necessary guidance for users to critically engage with automated speaking assessment, pose the right questions to test developers, and ultimately make informed decisions regarding the fitness for purpose of automated assessment solutions for their specific learning and assessment contexts.
    • Comparing writing proficiency assessments used in professional medical registration: a methodology to inform policy and practice

      Chan, Sathena Hiu Chong; Taylor, Lynda; University of Bedfordshire (Elsevier, 2020-10-13)
      Internationally trained doctors wishing to register and practise in an English-speaking country typically have to demonstrate that they can communicate effectively in English, including writing proficiency. Various English language proficiency (ELP) tests are available worldwide and are used for such licensing purposes. This means that medical registration bodies face the question of which test(s) will meet their needs, ideally reflecting the demands of their professional environment. This article reports a mixed-methods study to survey the policy and practice of health-care registration organisations in the UK and worldwide. The study aimed to identify ELP tests that were, or could be, considered as suitable for medical registration purposes and to understand the differences between them. The paper discusses what the study revealed about the function and comparability of different writing tests used in professional registration as well as the complex criteria a professional body may prioritise when selecting a test. Although the original study was completed in 2015, the paper takes account of subsequent changes in policy and practice. It offers a practical methodology and worked example which may be of interest and value to other researchers, language testers and policymakers as they face challenges in selecting and making comparisons across tests.
    • Research and practice in assessing academic reading: the case of IELTS

      Weir, Cyril J.; Chan, Sathena Hiu Chong (Cambridge University Press, 2019-08-29)
      The focus for attention in this volume is the reading component of the IELTS Academic module, which is principally used for admissions purposes into tertiary-level institutions throughout the world (see Davies 2008 for a detailed history of the developments in EAP testing leading up to the current IELTS). According to the official website (www.cambridgeenglish.org/exams-and-tests/ielts/test-format/), there are three reading passages in the Academic Reading Module, with a total of c.2,150–2,750 words. Individual tasks are not timed. Texts are taken from journals, magazines, books, and newspapers. All the topics are of general interest and the texts have been written for a non-specialist audience. The readings are intended to be about issues that are appropriate to candidates who will enter postgraduate or undergraduate courses. At least one text will contain detailed logical argument. One of the texts may contain non-verbal materials such as graphs, illustrations or diagrams. If a text contains technical terms which candidates may not know, a glossary is provided. The texts and questions become more difficult through the paper. A number of specific critical questions are addressed in applying the socio-cognitive validation framework to the IELTS Academic Reading Module:
      • Are the cognitive processes required to complete the IELTS Reading test tasks appropriate and adequate in their coverage? (Focus on cognitive validity in Chapter 4.)
      • Are the contextual characteristics of the test tasks and their administration appropriate and fair to the candidates who are taking them? (Focus on context validity in Chapter 5.)
      • What effects do the test and test scores have on various stakeholders? (Focus on consequential validity in Chapter 6.)
      • What external evidence is there that the test is fair? (Focus on criterion-related validity in Chapter 7.)
    • Placing construct definition at the heart of assessment: research, design and a priori validation

      Chan, Sathena Hiu Chong; Latimer, Nicola (Cambridge University Press, 2020-12-31)
      In this chapter, we will first highlight Professor Cyril Weir’s major research into the nature of academic reading. Using one of his test development projects as an example, we will describe how the construct of academic reading was operationalised in the local context of a British university by theoretical construct definition together with empirical analyses of students’ reading patterns on the test through eye-tracking. As we progress through the chapter, we reflect on how Weir’s various research projects fed into the development of the test and a new method of analysing eye-tracking data in relation to different types of reading.
    • Repeated test-taking and longitudinal test score analysis: editorial

      Green, Anthony; Van Moere, Alistair; University of Bedfordshire; MetaMetrics Inc. (Sage, 2020-09-27)
    • Applying the socio-cognitive framework: gathering validity evidence during the development of a speaking test

      Nakatsuhara, Fumiyo; Dunlea, Jamie; University of Bedfordshire; British Council (UCLES/Cambridge University Press, 2020-06-18)
      This chapter describes how Weir’s (2005; further elaborated in Taylor (Ed) 2011) socio-cognitive framework for validating speaking tests guided two a priori validation studies of the speaking component of the Test of English for Academic Purposes (TEAP) in Japan. We particularly reflect upon the academic achievements of Professor Cyril J Weir, in terms of:
      • the effectiveness and value of the socio-cognitive framework underpinning the development of the TEAP Speaking Test while gathering empirical evidence of the construct underlying a speaking test for the target context
      • his contribution to developing early career researchers and extending language testing expertise in the TEAP development team.
    • Three current, interconnected concerns for writing assessment

      Hamp-Lyons, Liz (Elsevier Ltd, 2014-09-26)
      Editorial
    • The need for EAP teacher knowledge in assessment

      Schmitt, Diane; Hamp-Lyons, Liz; Nottingham Trent University; University of Bedfordshire (Elsevier Ltd, 2015-05-08)
    • What is a John Swales?

      Hamp-Lyons, Liz (Elsevier Ltd, 2015-09-03)
      Editorial
    • Opposing tensions of local and international standards for EAP writing programmes: who are we assessing for?

      Bruce, Emma; Hamp-Lyons, Liz; City University of Hong Kong; University of Bedfordshire (Elsevier Ltd, 2015-04-24)
      In response to recent curriculum changes in secondary schools in Hong Kong, including the implementation of the 3-3-4 education structure (one year less at high school and one year more at university) and the introduction of a new school leavers' exam, the Hong Kong Diploma of Secondary Education (HKDSE), universities in the territory have revisited their English language curricula. At City University, a new EAP curriculum and assessment framework was developed to fit the re-defined needs of the new cohort of students. In this paper we describe the development and benchmarking process of a scoring instrument for EAP writing assessment at City University. We discuss the opposing tensions of local (HKDSE) and international (CEFR and IELTS) standards, the problems of aligning EAP needs-based domain scales and standards with the CEFR, and the issues associated with attempting to fulfil the institutional expectation that the EAP programme would raise students' scores by a whole CEFR scale step. Finally, we consider the political tensions created by the use of external, even international, reference points for specific levels of writing performance from all our students, and suggest the benefits of a specific, locally-designed, fit-for-purpose tool over one aligned with universal standards.
    • The future of JEAP and EAP

      Hamp-Lyons, Liz (Elsevier Ltd, 2015-12-12)
      Editorial
    • Farewell to holistic scoring?

      Hamp-Lyons, Liz (Elsevier, 2016-01-26)
      Editorial
    • Using assessment to promote learning: clarifying constructs, theories, and practices

      Leung, Constant; Davison, C.; East, M.; Evans, M.; Liu, Y.; Hamp-Lyons, Liz; Purpura, J.E. (Georgetown University Press, 2017-11-22)
    • Why researching EAP practice?

      Hamp-Lyons, Liz (Elsevier Ltd, 2018-01-08)
    • Comparing rating modes: analysing live, audio, and video ratings of IELTS Speaking Test performances

      Nakatsuhara, Fumiyo; Inoue, Chihiro; Taylor, Lynda (Taylor & Francis, 2020-08-26)
      This mixed-methods study compared IELTS examiners’ scores when assessing spoken performances under live and two ‘non-live’ testing conditions using audio and video recordings. Six IELTS examiners assessed 36 test-takers’ performances under the live, audio, and video rating conditions. Scores in the three rating modes were calibrated using the many-facet Rasch model (MFRM). For all three modes, examiners provided written justifications for their ratings, and verbal reports were also collected to gain insights into examiner perceptions of performances under the audio and video conditions. Results showed that, for all rating criteria, audio ratings were significantly lower than live and video ratings. Examiners noticed more negative performance features under the two non-live rating conditions than under the live condition. However, the richer information about test-taker performance available in the video mode appeared to lead video raters to rely less on such negative evidence than audio raters did when awarding scores. Verbal report data showed that having visual information in the video-rating mode helped examiners to understand what the test-takers were saying, to comprehend better what test-takers were communicating using non-verbal means, and to identify with greater confidence the source of test-takers’ hesitation, pauses, and awkwardness.
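      As an illustrative aside (a minimal sketch, not drawn from the paper itself): a many-facet Rasch model of the kind used for such calibrations is conventionally written in log-odds form, with one parameter per facet. The facet labels below (test-taker ability, examiner severity, rating-mode effect, criterion difficulty) are assumptions inferred from the study design described in the abstract:

      \[ \ln\left(\frac{P_{nijmk}}{P_{nijm(k-1)}}\right) = B_n - C_i - M_j - D_m - F_k \]

      where \(B_n\) is the ability of test-taker \(n\), \(C_i\) the severity of examiner \(i\), \(M_j\) the effect of rating mode \(j\) (live, audio, or video), \(D_m\) the difficulty of rating criterion \(m\), and \(F_k\) the difficulty of awarding score step \(k\) over step \(k-1\). Under such a formulation, the significantly lower audio ratings reported above would surface as a harsher estimate for the audio level of the mode facet.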
    • Assessment for learning in language education

      Green, Anthony (Urmia University, 2018-10-01)
      This paper describes the growing interest in assessment for learning (AfL) approaches in language education. It explains the term, traces the origins of AfL in developments in general education and considers the evidence for its claimed impact on learning outcomes. The paper sets out some of the challenges involved in researching, implementing and evaluating AfL initiatives in the context of language teaching and learning and considers how this may impact on our field in the future.
    • Writing: the re-construction of language

      Davidson, Andrew (Elsevier, 2018-09-13)
      This paper takes as its point of departure David Olson’s contention (as expressed in The Mind on Paper (2016), CUP, Cambridge) that writing affords a meta-representation of language through allowing linguistic elements to become explicit objects of awareness. In so doing, a tradition of suspicion of writing (e.g. Rousseau and Saussure) that sees it as a detour from and contamination of language is disarmed: writing becomes innocent, becomes naturalised. Also disarmed are some of the concerns arising from the observation made in the title of Per Linell’s book of a ‘written language bias in linguistics’ (2005, Routledge, London), with its attendant criticisms of approaches (e.g. Chomsky’s) that assume written language to be transparent to the putative underlying natural object. Taking Chomsky’s position (an unaware scriptism) as a representative point of orientation and target of critique, the paper assembles evidence that problematises the first-order, natural reality of cardinal linguistic constructs: phonemes, words and sentences. It is argued that the facticity of these constructs is artefactual, and that this facticity is achieved by way of the introjection of ideal objects which the mind constructs as denotations of elements of an alphabetic writing system: the mental representation of language is transformed by engagement with writing, and it is this non-natural artefact to which Structuralist/Generativist linguistics has been answering. Evidence for this position from the psycholinguistic and neurolinguistic literature is presented and discussed. The conclusion arrived at is that the cultural practice of literacy re-configures the cognitive realisation of language. Olson takes writing to be a map of the territory; however, it is suggested that the literate mind re-constructs the territory to answer to the features of the map.
    • Reflecting on the past, embracing the future

      Hamp-Lyons, Liz; University of Bedfordshire (Elsevier, 2019-10-14)
      In the Call for Papers for this anniversary volume of Assessing Writing, the Editors described the goal as “to trace the evolution of ideas, questions, and concerns that are key to our field, to explain their relevance in the present, and to look forward by exploring how these might be addressed in the future” and they asked me to contribute my thoughts. As the Editor of Assessing Writing between 2002 and 2017—a fifteen-year period—I realised from the outset that this was a very ambitious goal, l, one that no single paper could accomplish. Nevertheless, it seemed to me an opportunity to reflect on my own experiences as Editor, and through some of those experiences, offer a small insight into what this journal has done (and not done) to contribute to the debate about the “ideas, questions and concerns”; but also, to suggest some areas that would benefit from more questioning and thinking in the future. Despite the challenges of the task, I am very grateful to current Editors Martin East and David Slomp for the opportunity to reflect on these 25 years and to view them, in part, through the lens provided by the five articles appearing in this anniversary volume.