Developing computer-based assessment as a tool to support enquiry led learning
Authors
Collins, Carol Ann
Issue Date
2008-03
Subjects
X300 Academic studies in Education
assessment
computer-based assessment
computer-based testing
computer-assisted assessment
improving formative assessment
Kilauea exemplar
Abstract
This research explores the possibility of developing Computer-based Assessment (CBA) as a tool to support enquiry-led learning. In this approach learners explore and unpack thoughts and ideas that help them to learn and solve problems. A critical feature of this approach is feedback, and this research focussed on how to design and supply feedback in CBA. Two lines of research were sourced: Computer-assisted Assessment (CAA) and Improving Formative Assessment (IFA). Specifically, performance data were collected, analysed and evaluated from the statistical results of three CBA tests (approximately 100 undergraduates per test) and from qualitative feedback: the dialogic question and answer responses (approximately 30 learners x 100 responses) of learners engaged in Level 3 activity of the National Qualifications Framework (NQF). The outcome of the research is the development of the Kilauea exemplar, a theoretical model of an enquiry-led item type applied in a subject-specific domain.
Citation
Collins, C.A. (2008) 'Developing computer-based assessment as a tool to support enquiry led learning'. MSc by Research thesis. University of Bedfordshire.
Publisher
University of Bedfordshire
Type
Thesis or dissertation
Language
en
Description
A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Master of Science (MSc) by Research
Related items
Showing items related by title, author, creator and subject.
-
Assessing English: a trial collaborative standardised marking project
Gibbons, Simon; Marshall, Bethan; King's College London (University of Waikato, 2010-12)
Recent policy developments in England have, to some extent, relaxed the hold of external, high-stakes assessment on teachers of students in the early years of secondary education. In such a context, there is the opportunity for teachers to reassert the importance of teacher assessment as the most reliable means of judging a student’s abilities. A recent project jointly undertaken by the National Association for the Teaching of English (NATE) and the Centre for Evaluation and Monitoring (CEM) was one attempt to trial a model for the collaborative standardised assessment of students’ writing. This article puts this project in the context of previous assessment initiatives in English and suggests that, given recent policy developments, now may be precisely the time for the profession to seek to be proactive in setting the assessment agenda.
-
Researching the comparability of paper-based and computer-based delivery in a high-stakes writing test
Chan, Sathena Hiu Chong; Bax, Stephen; Weir, Cyril J. (Elsevier, 2018-04-07)
International language testing bodies are now moving rapidly towards using computers for many areas of English language assessment, despite the fact that research on comparability with paper-based assessment is still relatively limited in key areas. This study contributes to the debate by researching the comparability of a high-stakes EAP writing test (IELTS) in two delivery modes, paper-based (PB) and computer-based (CB). The study investigated 153 test takers' performances and their cognitive processes on IELTS Academic Writing Task 2 in the two modes, and the possible effect of computer familiarity on their test scores. Many-Facet Rasch Measurement (MFRM) was used to examine the difference in test takers' scores between the two modes, in relation to their overall and analytic scores. By means of questionnaires and interviews, we investigated the cognitive processes students employed under the two conditions of the test. A major contribution of our study is its use - for the first time in the computer-based writing assessment literature - of data from research into cognitive processes within real-world academic settings as a comparison with cognitive processing during academic writing under test conditions. In summary, this study offers important new insights into academic writing assessment in computer mode.