
    Language functions revisited: theoretical and empirical bases for language construct definition across the ability range

Authors: Green, Anthony
Affiliation: University of Bedfordshire
Issue Date: 2012
Subjects: can do statements; English Profile Programme; Common European Framework of Reference for Languages; language functions; language assessment
    Abstract
This book introduces the theoretical and empirical bases for the definition of language learning level in functional 'Can Do' terms for the English Profile Programme, setting out the ambitions of the Programme and presenting emerging findings. The English Profile Programme is an elaboration of the performance level descriptions of the Common European Framework of Reference for Languages (CEFR) that is concerned specifically with the English language. The CEFR has become influential in building a shared understanding of performance levels for foreign language learners. However, there is a considerable gap between the broad descriptions of levels provided, which cover a range of languages and learning contexts, and the level of detail required for applications such as syllabus or test design; this volume addresses that gap.
Citation: Green, A. (2012) Language Functions Revisited: Theoretical and Empirical Bases for Language Construct Definition Across the Ability Range. English Profile Studies 2. Cambridge: Cambridge University Press.
Publisher: Cambridge University Press
Journal: English Profile Studies
URI: http://hdl.handle.net/10547/238354
Additional Links: http://www.cambridge.org/gb/elt/catalogue/subject/project/item6621262/Language-Functions-Revisited/?site_locale=en_GB&currentSubjectID=2558844
Type: Book
Language: en
Series/Report no.: English Profile Studies 2
ISBN: 9780521184991
Collections: CRELLA Centre for Research in English Language Learning and Assessment


    Related items

    Showing items related by title, author, creator and subject.

• Validating a set of Japanese EFL proficiency tests: demonstrating locally designed tests meet international standards

  Dunlea, Jamie (University of Bedfordshire, 2015-12)
  This study applied the latest developments in language testing validation theory to derive a core body of evidence that can contribute to the validation of a large-scale, high-stakes English as a Foreign Language (EFL) testing program in Japan. The testing program consists of a set of seven level-specific tests targeting different levels of proficiency. This core aspect of the program was selected as the main focus of this study. The socio-cognitive model of language test development and validation provided a coherent framework for the collection, analysis and interpretation of evidence. Three research questions targeted core elements of a validity argument identified in the literature on the socio-cognitive model. RQ 1 investigated the criterial contextual and cognitive features of tasks at different levels of proficiency. Expert judgment and automated analysis tools were used to analyze a large bank of items administered in operational tests across multiple years. RQ 2 addressed empirical item difficulty across the seven levels of proficiency. An innovative approach to vertical scaling was used to place previously administered items from all levels onto a single Rasch-based difficulty scale. RQ 3 used multiple standard-setting methods to investigate whether the seven levels could be meaningfully related to an external proficiency framework. In addition, the study identified three subsidiary goals: firstly, to evaluate the efficacy of applying international standards of best practice to a local context; secondly, to critically evaluate the model of validation; and thirdly, to generate insights directly applicable to operational quality assurance. The study provides evidence across all three research questions to support the claim that the seven levels in the program are distinct. At the same time, the results provide insights into how to strengthen explicit task specification to improve consistency across levels. This study is the largest application of the socio-cognitive model in terms of the amount of operational data analyzed, and thus makes a significant contribution to the ongoing study of validity theory in the context of language testing. While the study demonstrates the efficacy of the socio-cognitive model selected to drive the research design, it also provides recommendations for further refining the model, with implications for the theory and practice of language testing validation.
• Linking writing and speaking in English as a Second Language assessment

  Hamp-Lyons, Liz (Hampton Press, 2012-03)
• Developing a model for investigating the impact of language assessment within educational contexts by a public examination provider

  Saville, N.D. (University of Bedfordshire, 2009-01)
  There is no comprehensive model of language test or examination impact and how it might be investigated within educational contexts by a provider of high-stakes examinations, such as an international examinations board. This thesis addresses the development of such a model from the perspective of Cambridge ESOL, a provider of English language tests and examinations in over 100 countries. The starting point for the thesis is a discussion of examinations within educational processes generally and the role that examination boards, such as Cambridge ESOL, play within educational systems. The historical context and assessment tradition are an important part of this discussion. In the literature review, the effects and consequences of language tests and examinations are discussed with reference to the better-known concept of washback and to how impact can be defined as a broader notion operating at both micro and macro levels. This is contextualised within the assessment literature on validity theory and the application of innovation theories within educational systems. Methodologically, the research is based on a meta-analysis employed to describe and review three impact projects. These three projects were carried out by researchers based in Cambridge to implement an approach to test impact which had emerged during the 1990s as part of the test development and validation procedures adopted by Cambridge ESOL. Based on the analysis, the main outcome and contribution to knowledge is an expanded model of impact designed to provide examination providers with a more effective “theory of action”. When applied within Cambridge ESOL, this model will allow anticipated impacts of the English language examinations to be monitored more effectively and will inform ongoing processes of innovation; this will lead to well-motivated improvements in the examinations and the related systems. Wider applications of the model in other assessment contexts are also suggested.
