
    Establishing test form and individual task comparability: a case study of a semi-direct speaking test

    Authors
    Weir, Cyril J.
    Wu, Jessica R.W.
    Affiliation
    University of Luton
    Language Training and Testing Center, Taiwan
    Issue Date
    2006-04-01
    Subjects
    speaking
    language testing
    Abstract
    Examination boards are often criticized for their failure to provide evidence of comparability across forms, and few such studies are publicly available. This study aims to investigate the extent to which three forms of the General English Proficiency Test Intermediate Speaking Test (GEPTS-I) are parallel in terms of two types of validity evidence: parallel-forms reliability and content validity. The three trial test forms, each containing three different task types (read-aloud, answering questions and picture description), were administered to 120 intermediate-level EFL learners in Taiwan. The performance data from the different test forms were analysed using classical procedures and Multi-Faceted Rasch Measurement (MFRM). Various checklists were also employed to compare the tasks in different forms qualitatively in terms of content. The results showed that all three test forms were statistically parallel overall and Forms 2 and 3 could also be considered parallel at the individual task level. Moreover, sources of variation to account for the variable difficulty of tasks in Form 1 were identified by the checklists. Results of the study provide insights for further improvement in parallel-form reliability of the GEPTS-I at the task level and offer a set of methodological procedures for other exam boards to consider. © 2006 Edward Arnold (Publishers) Ltd.
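    The parallel-forms reliability evidence the abstract refers to is conventionally estimated as the correlation between examinees' paired scores on two forms. The sketch below illustrates that idea only; the scores are hypothetical and are not data from the study.

    ```python
    # Minimal sketch: parallel-forms reliability as the Pearson correlation
    # between examinees' paired scores on two test forms.
    # The score lists below are illustrative, not data from the study.
    from statistics import mean, pstdev

    def parallel_forms_reliability(form_a, form_b):
        """Pearson correlation between paired scores on two test forms."""
        if len(form_a) != len(form_b):
            raise ValueError("each examinee needs a score on both forms")
        ma, mb = mean(form_a), mean(form_b)
        # Population covariance of the paired scores.
        cov = mean((a - ma) * (b - mb) for a, b in zip(form_a, form_b))
        return cov / (pstdev(form_a) * pstdev(form_b))

    # Hypothetical scores for five examinees on Forms 1 and 2.
    form1 = [62, 70, 75, 80, 88]
    form2 = [60, 72, 74, 83, 86]
    r = parallel_forms_reliability(form1, form2)
    ```

    A coefficient near 1 suggests the two forms rank examinees similarly; the study complements this statistical evidence with qualitative content checklists, which a correlation alone cannot capture.
    
    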
    Citation
    Weir CJ, Wu JRW (2006) 'Establishing test form and individual task comparability: a case study of a semi-direct speaking test', Language Testing, 23 (2), pp.167-197.
    Publisher
    SAGE
    Journal
    Language Testing
    URI
    http://hdl.handle.net/10547/623672
    DOI
    10.1191/0265532206lt326oa
    Additional Links
    https://journals.sagepub.com/doi/10.1191/0265532206lt326oa
    Type
    Article
    Language
    en
    ISSN
    0265-5322
    Collections
    English language learning and assessment
