Establishing the validity of reading-into-writing test tasks for the UK academic context

Handle:
http://hdl.handle.net/10547/312629
Title:
Establishing the validity of reading-into-writing test tasks for the UK academic context
Authors:
Chan, Sathena Hiu Chong
Abstract:
The present study aimed to establish a test development and validation framework for reading-into-writing tests, in order to improve the accountability of using the integrated task type to assess test takers' ability in Academic English. The study applied Weir's (2005) socio-cognitive framework to examine three components of test validity — context validity, cognitive validity and criterion-related validity — for two common types of reading-into-writing test task (an essay task with multiple verbal inputs, and an essay task with multiple verbal and non-verbal inputs). Through a literature review and a series of pilot studies, a set of contextual and cognitive parameters was defined that explicitly describes the features of the target academic writing tasks and the cognitive processes required to complete these tasks successfully. A mixed-method approach was used in the main study to establish the context, cognitive and criterion-related validity of the reading-into-writing test tasks. First, for context validity, expert judgement and automated textual analysis were applied to examine the degree of correspondence between the contextual features (overall task setting and input text features) of the reading-into-writing test tasks and those of the target academic writing tasks. For cognitive validity, a cognitive process questionnaire was developed to help participants report the processes they employed on the two reading-into-writing test tasks and two real-life academic tasks. A total of 443 questionnaires from 219 participants were collected. The analysis of cognitive validity comprised three strands: 1) the cognitive processes involved in real-life academic writing, 2) the extent to which these processes are elicited by the reading-into-writing test tasks, and 3) the underlying structure of the processes elicited by the reading-into-writing test tasks.
A range of descriptive, inferential and factor analyses were performed on the questionnaire data. The participants' scores on the real-life academic tasks and the reading-into-writing test tasks were collected for correlational analyses to investigate the criterion-related validity of the test tasks. The findings of the study support the context, cognitive and criterion-related validity of the integrated reading-into-writing task type. In terms of context validity, the two reading-into-writing tasks largely resembled the overall task setting, the input text features and the linguistic complexity of the input texts of the real-life tasks in a number of important ways. Regarding cognitive validity, the results revealed 11 cognitive processes involved in 5 phases of real-life academic writing, as well as the extent to which these processes were elicited by the test tasks. Both reading-into-writing test tasks elicited from high-achieving and low-achieving participants most of these cognitive processes, to a similar extent as on the real-life tasks. The medium-achieving participants tended to employ these processes more on the real-life tasks than on the test tasks. The results of exploratory factor analysis showed that both test tasks were largely able to elicit from the participants the same underlying cognitive processes as the real-life tasks did. Lastly, for criterion-related validity, the correlations between the two reading-into-writing test scores and academic performance reported in this study are higher than most figures previously reported in the literature. To the best of the researcher's knowledge, this is the first study to validate two types of reading-into-writing test task in terms of these three validity components.
The results of the study provide empirical evidence that reading-into-writing tests can successfully operationalise, under test conditions, the appropriate contextual features of academic writing tasks and the cognitive processes required in real-life academic writing, and that the reading-into-writing test scores correlate promisingly with target academic performance. The results have important implications for university admissions officers and other stakeholders; in particular, they demonstrate that the integrated reading-into-writing task type is a valid option when considering language teaching and testing for academic purposes. The study also puts forward a test framework with explicit contextual and cognitive parameters for language teachers, test developers and future researchers who intend to develop valid reading-into-writing test tasks for assessing academic writing ability, and to conduct validity studies on this integrated task type.
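The criterion-related validity analysis described above rests on Pearson correlations between test-task scores and real-life academic performance. A minimal sketch of that computation is below; the score lists are invented placeholders for illustration, not data from the thesis.

```python
# Hedged sketch: Pearson product-moment correlation of the kind used in
# criterion-related validity studies. All scores are invented placeholders.
import statistics


def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical data: reading-into-writing test scores vs. academic grades
test_scores = [55, 62, 70, 48, 66, 74]
academic_grades = [58, 60, 72, 50, 64, 78]
r = pearson(test_scores, academic_grades)  # a value in [-1, 1]
```

In practice such coefficients are computed with a statistics package (e.g. `scipy.stats.pearsonr`, which also returns a p-value), but the formula itself is as above.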
Citation:
Chan, S.H.C. (2013) 'Establishing the validity of reading-into-writing test tasks for the UK academic context'. PhD thesis. University of Bedfordshire.
Publisher:
University of Bedfordshire
Issue Date:
Nov-2013
URI:
http://hdl.handle.net/10547/312629
Type:
Thesis or dissertation
Language:
en
Description:
A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy
Appears in Collections:
PhD e-theses

Full metadata record

DC Field: Value (Language)
dc.contributor.author: Chan, Sathena Hiu Chong (en)
dc.date.accessioned: 2014-02-11T11:32:51Z
dc.date.available: 2014-02-11T11:32:51Z
dc.date.issued: 2013-11
dc.identifier.citation: Chan, S.H.C. (2013) 'Establishing the validity of reading-into-writing test tasks for the UK academic context'. PhD thesis. University of Bedfordshire. (en)
dc.identifier.uri: http://hdl.handle.net/10547/312629
dc.description: A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy (en)
dc.language.iso: en (en)
dc.publisher: University of Bedfordshire (en)
dc.subject: X162 Teaching English as a Foreign Language (TEFL) (en)
dc.subject: academic writing (en)
dc.subject: academic reading (en)
dc.subject: reading tests (en)
dc.subject: language testing (en)
dc.subject: language assessment (en)
dc.title: Establishing the validity of reading-into-writing test tasks for the UK academic context (en)
dc.type: Thesis or dissertation (en)
dc.type.qualificationname: PhD (en_GB)
dc.type.qualificationlevel: PhD (en)
dc.publisher.institution: University of Bedfordshire (en)
This item is licensed under a Creative Commons License
All Items in UOBREP are protected by copyright, with all rights reserved, unless otherwise indicated.