    Linking writing and speaking in English as a Second Language assessment

    Authors: Hamp-Lyons, Liz
    Issue Date: 2012-03
    Subjects: English as a Second Language; language testing; language assessment; writing; spoken language; oral language
    Citation: Hamp-Lyons, L. (2012) 'Linking writing and speaking in English as a Second Language assessment', in Elliot, N. and Perelman, L. (eds.) Writing Assessment in the 21st Century: Essays in Honor of Edward M. White. Hampton Press, pp. 407-430.
    Publisher: Hampton Press
    URI: http://hdl.handle.net/10547/238375
    Additional Links:
    http://www.hamptonpress.com/Merchant2/merchant.mvc?Screen=PROD&Product_Code=978-1-61289-087-6
    http://www.amazon.co.uk/Writing-Assessment-21st-Century-Essays/dp/1612890873/ref=sr_1_1?ie=UTF8&qid=1340369874&sr=8-1
    Type: Book chapter
    Language: en
    ISBN: 1612890873; 9781612890876
    Collections: CRELLA Centre for Research in English Language Learning and Assessment

    Related items

    Showing items related by title, author, creator and subject.

    • Validating a set of Japanese EFL proficiency tests: demonstrating locally designed tests meet international standards

      Dunlea, Jamie (University of Bedfordshire, 2015-12)
      This study applied the latest developments in language testing validation theory to derive a core body of evidence that can contribute to the validation of a large-scale, high-stakes English as a Foreign Language (EFL) testing program in Japan. The testing program consists of a set of seven level-specific tests targeting different levels of proficiency. This core aspect of the program was selected as the main focus of this study. The socio-cognitive model of language test development and validation provided a coherent framework for the collection, analysis and interpretation of evidence. Three research questions targeted core elements of a validity argument identified in the literature on the socio-cognitive model. RQ 1 investigated the criterial contextual and cognitive features of tasks at different levels of proficiency. Expert judgment and automated analysis tools were used to analyze a large bank of items administered in operational tests across multiple years. RQ 2 addressed empirical item difficulty across the seven levels of proficiency. An innovative approach to vertical scaling was used to place previously administered items from all levels onto a single Rasch-based difficulty scale. RQ 3 used multiple standard-setting methods to investigate whether the seven levels could be meaningfully related to an external proficiency framework. In addition, the study identified three subsidiary goals: firstly, to evaluate the efficacy of applying international standards of best practice to a local context; secondly, to critically evaluate the model of validation; and thirdly, to generate insights directly applicable to operational quality assurance. The study provides evidence across all three research questions to support the claim that the seven levels in the program are distinct. At the same time, the results provide insights into how to strengthen explicit task specification to improve consistency across levels. This study is the largest application of the socio-cognitive model in terms of the amount of operational data analyzed, and thus makes a significant contribution to the ongoing study of validity theory in the context of language testing. While the study demonstrates the efficacy of the socio-cognitive model selected to drive the research design, it also provides recommendations for further refining the model, with implications for the theory and practice of language testing validation.
    • Developing a model for investigating the impact of language assessment within educational contexts by a public examination provider

      Saville, N.D. (University of Bedfordshire, 2009-01)
      There is no comprehensive model of language test or examination impact and how it might be investigated within educational contexts by a provider of high-stakes examinations, such as an international examinations board. This thesis addresses the development of such a model from the perspective of Cambridge ESOL, a provider of English language tests and examinations in over 100 countries. The starting point for the thesis is a discussion of examinations within educational processes generally and the role that examination boards, such as Cambridge ESOL, play within educational systems. The historical context and assessment tradition are an important part of this discussion. In the literature review, the effects and consequences of language tests and examinations are discussed with reference to the better-known concept of washback and to how impact can be defined as a broader notion operating at both micro and macro levels. This is contextualised within the assessment literature on validity theory and the application of innovation theories within educational systems. Methodologically, the research is based on a meta-analysis employed to describe and review three impact projects. These three projects were carried out by researchers based in Cambridge to implement an approach to test impact which had emerged during the 1990s as part of the test development and validation procedures adopted by Cambridge ESOL. Based on the analysis, the main outcome and contribution to knowledge is an expanded model of impact designed to provide examination providers with a more effective “theory of action”. When applied within Cambridge ESOL, this model will allow anticipated impacts of the English language examinations to be monitored more effectively and will inform ongoing processes of innovation; this will lead to well-motivated improvements in the examinations and the related systems. Wider applications of the model in other assessment contexts are also suggested.
    • The impact of computer interface design on Saudi students’ performance on a L2 reading test

      Korevaar, Serge (University of Bedfordshire, 2015-01)
      This study investigates the effect of testing mode on lower-level Saudi Arabian test-takers’ performance and cognitive processes when taking an L2 reading test on computer compared with its paper-based counterpart, from an interface design perspective. An interface was developed and implemented in the computer-based version of the L2 reading test, which was administered to 102 Saudi Arabian university students for quantitative analyses and to an additional eighteen for qualitative analyses. All participants were assessed on the same L2 reading test in two modes on two separate occasions in a within-subject design. Statistical tests such as correlations, group comparisons, and item analyses were employed to investigate the test-mode effect on test-takers’ performance, while test-takers’ concurrent verbalizations were recorded during the reading test to investigate their cognitive processes. Strategies found in both modes were compared by their frequency of occurrence. In addition, a qualitative illustration of test-takers’ cognitive behavior was given to describe the processes involved in taking a lower-level L2 reading test. A mixed-method approach was followed, with questionnaires, think-aloud protocols, and post-experimental interviews as the main data collection instruments. Results on test-takers’ performance showed no significant difference between the two modes of testing on overall reading performance; however, item-level analyses revealed significant differences on two of the test’s items. Further qualitative investigation into possible interface-design-related causes for these differences showed no identifiable relationship between test-takers’ performance and the computer-based testing mode. Analyses of the cognitive processes showed significant differences in three of the cognitive processes employed by test-takers, indicating that test-takers had more difficulty processing text in the paper-based test than in the computer-based test. Both the product and process analyses further provided convincing evidence supporting the cognitive validity, content validity, and context validity, and thereby the construct validity, of the computer-based test used in this study.
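
    The first related item above describes placing items from all seven test levels onto a single Rasch-based difficulty scale through vertical scaling. As background, here is a minimal sketch (in Python) of the dichotomous Rasch model that underlies such a scale; the level labels, difficulty values, and ability value are hypothetical, chosen only to illustrate why items on a common logit scale become directly comparable.

        import math

        def rasch_probability(theta: float, b: float) -> float:
            """Probability that a test-taker of ability theta (in logits)
            answers an item of difficulty b correctly under the dichotomous
            Rasch model: P = exp(theta - b) / (1 + exp(theta - b))."""
            return 1.0 / (1.0 + math.exp(-(theta - b)))

        # Hypothetical difficulties (in logits) for items drawn from different
        # levels, after vertical scaling has placed them on one common scale.
        item_difficulties = {
            "lower-level item": -2.0,
            "mid-level item": 0.0,
            "higher-level item": 2.0,
        }

        theta = 0.5  # hypothetical test-taker ability on the same scale
        for name, b in item_difficulties.items():
            print(f"{name}: P(correct) = {rasch_probability(theta, b):.2f}")

    Because all the difficulties sit on one scale, a single ability estimate yields comparable success probabilities across levels (here roughly 0.92, 0.62, and 0.18), which is what makes claims about the distinctness of the seven levels empirically checkable.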
