    • Developing rubrics to assess the reading-into-writing skills: a case study

      Chan, Sathena Hiu Chong; Inoue, Chihiro; Taylor, Lynda; University of Bedfordshire (Elsevier Ltd, 2015-08-08)
      The integrated assessment of language skills, particularly reading-into-writing, is experiencing a renaissance. The use of rating rubrics with verbal descriptors that describe the quality of L2 writing performance is well established in large-scale assessment. However, less attention has been directed towards the development of reading-into-writing rubrics. The task of identifying and evaluating the contribution of reading ability to the writing process and product, so that it can be reflected in a set of rating criteria, is not straightforward. This paper reports on a recent project to define the construct of reading-into-writing ability for designing a suite of integrated tasks at four proficiency levels, ranging from CEFR A2 to C1. The authors discuss how theoretical construct definition, together with empirical analyses of test-taker performance, was used to underpin the development of rating rubrics for the reading-into-writing tests. Methodologies utilised in the project included questionnaires, expert panel judgement, group interviews, automated textual analysis and analysis of rater reliability. Based on the results of three pilot studies, the effectiveness of the rating rubrics is discussed. The findings can inform decisions about how best to account for both the reading and writing dimensions of test-taker performance in the rubric descriptors.
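      One of the empirical strands listed above, the analysis of rater reliability, can be illustrated with a minimal sketch. The band scores below are invented, and the two indices shown (exact agreement and a Pearson correlation) are only one common way of checking how consistently two raters apply a rubric; the project's actual reliability analysis is not detailed in the abstract.

      # Illustrative sketch only: two simple rater-reliability indices computed
      # on invented band scores from two raters applying the same rubric.
      import numpy as np

      rater_a = np.array([3, 4, 2, 5, 3, 4, 4, 2, 3, 5])  # hypothetical band scores
      rater_b = np.array([3, 4, 3, 5, 3, 3, 4, 2, 4, 5])

      exact_agreement = np.mean(rater_a == rater_b)      # proportion of identical scores
      correlation = np.corrcoef(rater_a, rater_b)[0, 1]  # Pearson r between the two raters

      print(f"exact agreement: {exact_agreement:.0%}, correlation: {correlation:.2f}")

      Indices such as these, alongside judgemental methods like expert panel review, are typical ways of checking whether rubric descriptors can be applied consistently, which is the kind of evidence the project's three pilot studies were designed to provide.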
    • Researching L2 writers’ use of metadiscourse markers at intermediate and advanced levels

      Bax, Stephen; Nakatsuhara, Fumiyo; Waller, Daniel; University of Bedfordshire; University of Central Lancashire (Elsevier, 2019-02-20)
      Metadiscourse markers are features that refer to aspects of text organisation or indicate a writer’s stance towards the text’s content or towards the reader (Hyland, 2004:109). The CEFR (Council of Europe, 2001) indicates that one of the key areas of development anticipated between levels B2 and C1 is an increasing variety of discourse markers and a growing acknowledgement of the intended audience by learners. This study represents the first large-scale investigation of metadiscourse in general second language learner writing, based on the analysis of 281 metadiscourse markers in 13 categories across 900 exam scripts at CEFR levels B2-C2. The study employed the online text analysis tool Text Inspector (Bax, 2012), in conjunction with human analysts. The findings revealed that higher-level writers used fewer metadiscourse markers than lower-level writers, but used a significantly wider range of markers in 8 of the 13 classes. The study also demonstrated the crucial importance of analysing not only the behaviour of whole classes of metadiscourse items but also the individual items themselves. The findings are of potential interest to those involved in the development of assessment scales at different levels of the CEFR, and to teachers interested in supporting learners' development.
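      The frequency-and-range analysis described above can be illustrated with a minimal sketch. The study itself used Text Inspector (Bax, 2012) with 281 markers in 13 categories; the marker lists, scripts and code below are hypothetical and greatly simplified, showing only the general idea of counting markers per class and comparing frequency and range across levels.

      # Illustrative sketch only: counts a tiny, hypothetical set of metadiscourse
      # markers per class and reports per-1000-word frequency and the number of
      # marker classes used. The real analysis used Text Inspector's 281 markers.
      from collections import Counter
      import re

      MARKERS = {  # hypothetical subset of marker classes
          "transitions": ["however", "therefore", "in addition"],
          "hedges": ["perhaps", "might", "possibly"],
          "engagement": ["you", "consider", "note that"],
      }

      def analyse_script(text):
          """Return markers per 1000 words and the set of marker classes used."""
          lowered = text.lower()
          words = re.findall(r"[a-z']+", lowered)
          counts = Counter()
          for category, markers in MARKERS.items():
              for marker in markers:
                  counts[category] += len(re.findall(r"\b" + re.escape(marker) + r"\b", lowered))
          total = sum(counts.values())
          freq_per_1000 = 1000 * total / max(len(words), 1)
          classes_used = {c for c, n in counts.items() if n > 0}
          return freq_per_1000, classes_used

      b2_script = "However, you might perhaps agree. However, it is not easy."
      c2_script = "Consider, however, that this might possibly fail; note that it did."
      for level, script in [("B2", b2_script), ("C2", c2_script)]:
          freq, classes = analyse_script(script)
          print(f"{level}: {freq:.1f} markers per 1000 words, range = {len(classes)} of {len(MARKERS)} classes")

      In this simplified form, "range" is simply the number of marker classes a writer uses at least once, which mirrors the reported pattern that higher-level writers used fewer markers overall but drew on a wider range of classes.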
    • Researching participants taking IELTS Academic Writing Task 2 (AWT2) in paper mode and in computer mode in terms of score equivalence, cognitive validity and other factors

      Chan, Sathena Hiu Chong; Bax, Stephen; Weir, Cyril J. (British Council and IDP: IELTS Australia, 2017-08-01)
      Computer-based (CB) assessment is becoming more common in most university disciplines, and international language testing bodies now routinely use computers for many areas of English language assessment. Given that, in the near future, IELTS will also need to move towards offering CB options alongside traditional paper-based (PB) modes, the research reported here prepares for that possibility, building on research carried out some years ago which investigated the statistical comparability of the IELTS writing test across the two delivery modes, and offering a fresh look at the relevant issues. By means of questionnaires and interviews, the current study investigates the extent to which 153 test-takers’ cognitive processes, while completing IELTS Academic Writing in PB mode and in CB mode, compare with the real-world cognitive processes of students completing academic writing at university. A major contribution of our study is its use, for the first time in the academic literature, of data from research into cognitive processes within real-world academic settings as a comparison with cognitive processing during academic writing under test conditions. The most important conclusion from the study is that, according to the 5-facet MFRM analysis, there were no significant differences in the scores awarded by two independent raters for candidates’ performances on the tests taken under the two conditions, one paper-and-pencil and the other computer-based. Regarding the analytic score criteria, the differences in three areas (i.e. Task Achievement, Coherence and Cohesion, and Grammatical Range and Accuracy) were not significant, but the difference reported for Lexical Resources was significant, if slight. In summary, the difference in scores between the two modes is at an acceptable level. With respect to the cognitive processes students employ under the two conditions of the test, results of the Cognitive Process Questionnaire (CPQ) survey indicate a similar pattern between the cognitive processes involved in writing on a computer and writing with paper and pencil: there were no noticeable major differences in the general tendency of the mean of each questionnaire item across the two test modes. In summary, the cognitive processes were employed in a similar fashion under the two delivery conditions. Based on the interview data (n=30), it appears that the participants reported using most of the processes in a similar way in the two modes, although a few potential differences indicated by the interview data might be worth further investigation in future studies. The Computer Familiarity Questionnaire survey shows that these students are, in general, familiar with computer usage and that their overall reactions towards working with a computer are positive. Multiple regression analysis, used to find out whether computer familiarity had any effect on students’ performance in the two modes, suggested that test-takers who do not have a suitable familiarity profile might perform slightly worse in computer mode than those who do. In summary, the research reported here offers a unique comparison with real-world academic writing, and presents a significant contribution to the research base which IELTS and comparable international testing bodies will need to consider if they are to introduce CB test versions in future.
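      The multiple regression mentioned above, which tested whether computer familiarity predicted performance in computer mode, can be sketched in simplified form. The data, predictor variables and model specification below are invented for illustration and assume the statsmodels package; the study's actual variables and modelling choices may differ.

      # Illustrative sketch only: regressing the computer-mode writing score on the
      # paper-mode score plus hypothetical computer-familiarity predictors.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 153  # same number of test-takers as in the study; the data are invented

      familiarity = rng.normal(0, 1, size=(n, 2))   # e.g. typing comfort, daily computer use
      pb_score = rng.normal(6.0, 0.5, size=n)       # paper-based writing score
      cb_score = pb_score + 0.1 * familiarity[:, 0] + rng.normal(0, 0.3, size=n)

      X = sm.add_constant(np.column_stack([pb_score, familiarity]))
      model = sm.OLS(cb_score, X).fit()
      print(model.summary())  # coefficients on the familiarity columns index any mode-specific effect

      With the paper-based score held constant, a positive coefficient on a familiarity predictor would correspond to the reported pattern that test-takers with a less suitable familiarity profile perform slightly worse in computer mode.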
    • Using keystroke logging to understand writers’ processes on a reading-into-writing test

      Chan, Sathena Hiu Chong (Springer Open, 2017-05-19)
      Background: Integrated reading-into-writing tasks are increasingly used in large-scale language proficiency tests. Such tasks are said to possess higher authenticity as they reflect real-life writing conditions better than independent, writing-only tasks. However, to define the reading-into-writing construct effectively, more empirical evidence regarding how writers compose from sources, both in real life and under test conditions, is urgently needed. Most previous process studies used think-aloud protocols or questionnaires to collect evidence; these methods rely on participants’ perceptions of their processes, as well as their ability to report them. Findings: This paper reports on a small-scale experimental study exploring writers’ processes on a reading-into-writing test by employing keystroke logging. Two L2 postgraduates completed an argumentative essay on computer, and their text production processes were captured by a keystroke logging programme. The students were also interviewed to provide additional information. Keystroke logging, like most computing tools, provides a range of measures. The study examined the students’ reading-into-writing processes by analysing a selection of these keystroke logging measures in conjunction with the students’ final texts and interview protocols. Conclusions: The results suggest that the nature of the writers’ reading-into-writing processes might have a major influence on their final performance. Recommendations for future process studies are provided.
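      As a rough illustration of the kind of measures a keystroke logging programme yields, the sketch below derives pause counts and burst lengths from a hypothetical timestamped log. The two-second threshold, the log excerpt and the choice of measures are assumptions for illustration; the programme and measures actually used in the study are not specified here.

      # Illustrative sketch only: deriving simple writing-process measures (pauses
      # and burst lengths) from an invented keystroke log of (timestamp_ms, key).
      PAUSE_THRESHOLD_MS = 2000  # a commonly used two-second pause threshold

      log = [(0, "T"), (180, "h"), (350, "e"), (2900, " "),
             (3100, "t"), (3250, "e"), (8600, "s"), (8790, "t")]

      intervals = [t2 - t1 for (t1, _), (t2, _) in zip(log, log[1:])]
      pauses = [d for d in intervals if d >= PAUSE_THRESHOLD_MS]

      # A burst is a stretch of keystrokes between pauses of at least the threshold.
      bursts, current = [], 1
      for d in intervals:
          if d >= PAUSE_THRESHOLD_MS:
              bursts.append(current)
              current = 1
          else:
              current += 1
      bursts.append(current)

      print(f"pauses >= 2 s: {len(pauses)}, mean pause: {sum(pauses) / len(pauses):.0f} ms")
      print(f"mean burst length: {sum(bursts) / len(bursts):.1f} keystrokes")

      A two-second pause threshold is a common convention in keystroke logging research, although individual studies vary in the cut-off and in which of the available measures they select.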