• The cognitive validity of reading and writing tests designed for young learners

      Field, John (Cambridge University Press, 2018-06-01)
      The notion of cognitive validity becomes considerably more complicated when one extends it to tests designed for Young Learners. It then becomes necessary to take full account of the level of cognitive development of the target population (their ability to handle certain mental operations and not others). It may also be necessary to include some consideration of their level of linguistic development in L1: in particular, the degree of proficiency they may have achieved in reading and writing. This chapter examines the extent to which awareness of the cognitive development of young learners up to the age of 12 should and does influence the decisions made by those designing tests of second language reading and writing. The limitations and strengths of young learners of this age range are matched against the various processing demands entailed in second language reading and writing and are then related to characteristics of the Young Learners tests offered by the Cambridge English examinations.
    • Developing rubrics to assess the reading-into-writing skills: a case study

      Chan, Sathena Hiu Chong; Inoue, Chihiro; Taylor, Lynda; University of Bedfordshire (Elsevier Ltd, 2015-08-08)
      The integrated assessment of language skills, particularly reading-into-writing, is experiencing a renaissance. The use of rating rubrics, with verbal descriptors that describe the quality of L2 writing performance, in large-scale assessment is well-established. However, less attention has been directed towards the development of reading-into-writing rubrics. The task of identifying and evaluating the contribution of reading ability to the writing process and product so that it can be reflected in a set of rating criteria is not straightforward. This paper reports on a recent project to define the construct of reading-into-writing ability for designing a suite of integrated tasks at four proficiency levels, ranging from CEFR A2 to C1. The authors discuss how the processes of theoretical construct definition, together with empirical analyses of test taker performance, were used to underpin the development of rating rubrics for the reading-into-writing tests. Methodologies utilised in the project included questionnaires, expert panel judgement, group interviews, automated textual analysis and analysis of rater reliability. Based on the results of three pilot studies, the effectiveness of the rating rubrics is discussed. The findings can inform decisions about how best to account for both the reading and writing dimensions of test taker performance in the rubric descriptors.
    • Paper-based vs computer-based writing assessment: divergent, equivalent or complementary?

      Chan, Sathena Hiu Chong (Elsevier, 2018-05-16)
      Writing on a computer is now commonplace in most post-secondary educational contexts and workplaces, making research into computer-based writing assessment essential. This special issue of Assessing Writing includes a range of articles focusing on computer-based writing assessments. Some of these have been designed to parallel an existing paper-based assessment, while others have been constructed as computer-based from the beginning. The selection of papers addresses various dimensions of the validity of computer-based writing assessment use in different contexts and across levels of L2 learner proficiency. First, three articles deal with the impact of the two delivery modes, paper-based or computer-based, on test takers’ processing and performance in large-scale high-stakes writing tests; next, two articles explore the use of online writing assessment in higher education; the final two articles evaluate the use of technologies to provide feedback to support learning.
    • Reflecting on the past, embracing the future

      Hamp-Lyons, Liz; University of Bedfordshire (Elsevier, 2019-10-14)
      In the Call for Papers for this anniversary volume of Assessing Writing, the Editors described the goal as “to trace the evolution of ideas, questions, and concerns that are key to our field, to explain their relevance in the present, and to look forward by exploring how these might be addressed in the future” and they asked me to contribute my thoughts. As the Editor of Assessing Writing between 2002 and 2017, a fifteen-year period, I realised from the outset that this was a very ambitious goal, one that no single paper could accomplish. Nevertheless, it seemed to me an opportunity to reflect on my own experiences as Editor and, through some of those experiences, offer a small insight into what this journal has done (and not done) to contribute to the debate about the “ideas, questions and concerns”; but also, to suggest some areas that would benefit from more questioning and thinking in the future. Despite the challenges of the task, I am very grateful to current Editors Martin East and David Slomp for the opportunity to reflect on these 25 years and to view them, in part, through the lens provided by the five articles appearing in this anniversary volume.
    • Researching metadiscourse markers in candidates’ writing at Cambridge FCE, CAE and CPE levels

      Bax, Stephen; Waller, Daniel; Nakatsuhara, Fumiyo; University of Bedfordshire; University of Central Lancashire (2013-09-07)
      This paper reports on research funded through the Cambridge ESOL Funded Research Programme, Round Three, 2012.
    • Researching participants taking IELTS Academic Writing Task 2 (AWT2) in paper mode and in computer mode in terms of score equivalence, cognitive validity and other factors

      Chan, Sathena Hiu Chong; Bax, Stephen; Weir, Cyril J. (British Council and IDP: IELTS Australia, 2017-08-01)
      Computer-based (CB) assessment is becoming more common in most university disciplines, and international language testing bodies now routinely use computers for many areas of English language assessment. Given that, in the near future, IELTS also will need to move towards offering CB options alongside traditional paper-based (PB) modes, the research reported here prepares for that possibility, building on research carried out some years ago which investigated the statistical comparability of the IELTS writing test between the two delivery modes, and offering a fresh look at the relevant issues. By means of questionnaires and interviews, the current study investigates the extent to which 153 test-takers’ cognitive processes, while completing IELTS Academic Writing in PB mode and in CB mode, compare with the real-world cognitive processes of students completing academic writing at university. A major contribution of our study is its use – for the first time in the academic literature – of data from research into cognitive processes within real-world academic settings as a comparison with cognitive processing during academic writing under test conditions. The most important conclusion from the study is that, according to the 5-facet Many-Facet Rasch Measurement (MFRM) analysis, there were no significant differences in the scores awarded by two independent raters for candidates’ performances on the tests taken under the two conditions, one paper-and-pencil and the other computer-based. Regarding the analytic scoring criteria, the differences in three areas (i.e. Task Achievement, Coherence and Cohesion, and Grammatical Range and Accuracy) were not significant, but the difference reported in Lexical Resources was significant, if slight. In summary, the difference in scores between the two modes is at an acceptable level. With respect to the cognitive processes students employ in performing under the two conditions of the test, results of the Cognitive Process Questionnaire (CPQ) survey indicate a similar pattern between the cognitive processes involved in writing on a computer and writing with paper-and-pencil. There were no noticeable major differences in the general tendency of the mean of each questionnaire item across the two test modes. In summary, the cognitive processes were employed in a similar fashion under the two delivery conditions. Based on the interview data (n=30), it appears that the participants reported using most of the processes in a similar way between the two modes. Nevertheless, a few potential differences indicated by the interview data might be worth further investigation in future studies. The Computer Familiarity Questionnaire survey shows that these students in general are familiar with computer usage and their overall reactions towards working with a computer are positive. Multiple regression analysis, used to find out if computer familiarity had any effect on students’ performances on the two modes, suggested that test-takers who do not have a suitable familiarity profile might perform slightly worse in computer mode than those who do. In summary, the research presented in this report offers a unique comparison with real-world academic writing, and makes a significant contribution to the research base which IELTS and comparable international testing bodies will need to consider if they are to introduce CB test versions in future.
    • Researching the comparability of paper-based and computer-based delivery in a high-stakes writing test

      Chan, Sathena Hiu Chong; Bax, Stephen; Weir, Cyril J. (Elsevier, 2018-04-07)
      International language testing bodies are now moving rapidly towards using computers for many areas of English language assessment, despite the fact that research on comparability with paper-based assessment is still relatively limited in key areas. This study contributes to the debate by researching the comparability of a high-stakes EAP writing test (IELTS) in two delivery modes, paper-based (PB) and computer-based (CB). The study investigated 153 test takers' performances and their cognitive processes on IELTS Academic Writing Task 2 in the two modes, and the possible effect of computer familiarity on their test scores. Many-Facet Rasch Measurement (MFRM) was used to examine the difference in test takers' scores between the two modes, in relation to their overall and analytic scores. By means of questionnaires and interviews, we investigated the cognitive processes students employed under the two conditions of the test. A major contribution of our study is its use, for the first time in the computer-based writing assessment literature, of data from research into cognitive processes within real-world academic settings as a comparison with cognitive processing during academic writing under test conditions. In summary, this study offers important new insights into academic writing assessment in computer mode.
    • Writing: the re-construction of language

      Davidson, Andrew (Elsevier, 2018-09-13)
      This paper takes as its point of departure David Olson’s contention (as expressed in The Mind on Paper, 2016, CUP, Cambridge) that writing affords a meta-representation of language by allowing linguistic elements to become explicit objects of awareness. In so doing, a tradition of suspicion of writing (e.g. Rousseau and Saussure) that sees it as a detour from and contamination of language is disarmed: writing becomes innocent, becomes naturalised. Also disarmed are some of the concerns arising from the observation made in the title of Per Linell’s book of a ‘written language bias in linguistics’ (2005, Routledge, London), with its attendant criticisms of approaches (e.g. Chomsky’s) that assume written language to be transparent to the putative underlying natural object. Taking Chomsky’s position (an unaware scriptism) as a representative point of orientation and target of critique, the paper assembles evidence that problematises the first-order, natural reality of cardinal linguistic constructs: phonemes, words and sentences. It is argued that the facticity of these constructs is artefactual, and that that facticity is achieved by way of the introjection of ideal objects which the mind constructs as denotations of elements of an alphabetic writing system: the mental representation of language is transformed by engagement with writing, and it is this non-natural artefact to which Structuralist/Generativist linguistics has been answering. Evidence for this position from the psycholinguistic and neurolinguistic literature is presented and discussed. The conclusion arrived at is that the cultural practice of literacy re-configures the cognitive realisation of language. Olson takes writing to be a map of the territory; however, it is suggested that the literate mind re-constructs the territory to answer to the features of the map.