Show simple item record

dc.contributor.author: Khabbazbashi, Nahal
dc.contributor.author: Xu, Jing
dc.contributor.author: Galaczi, Evelina D.
dc.date.accessioned: 2020-11-12T10:14:28Z
dc.date.available: 2023-02-10T00:00:00Z
dc.date.available: 2020-11-12T10:14:28Z
dc.date.issued: 2021-02-10
dc.identifier.citation: Khabbazbashi N, Xu J, Galaczi E (2021) 'Opening the black box: exploring automated speaking evaluation', in Lanteigne B, Coombe C, Brown JD (eds.) Issues in Language Testing Around the World: Insights for Language Test Users. Springer, pp. -. [en_US]
dc.identifier.isbn: 9789813342316
dc.identifier.doi: 10.1007/978-981-33-4232-3
dc.identifier.uri: http://hdl.handle.net/10547/624618
dc.description.abstract: The rapid advances in speech processing and machine learning technologies have attracted language testers' strong interest in developing automated speaking assessment, in which candidate responses are scored by computer algorithms rather than trained human examiners. Despite its increasing popularity, automatic evaluation of spoken language is still shrouded in mystery and technical jargon, often resembling an opaque "black box" that transforms candidate speech into scores in a matter of minutes. Our chapter explicitly problematizes this lack of transparency around test score interpretation and use and asks the following questions: What do automatically derived scores actually mean? What are the speaking constructs underlying them? What are some common problems encountered in the automated assessment of speaking? And how can test users evaluate the suitability of automated speaking assessment for their proposed test uses? In addressing these questions, the purpose of our chapter is to explore the benefits, problems, and caveats associated with automated speaking assessment, touching on key theoretical discussions of construct representation and score interpretation as well as practical issues such as the infrastructure necessary for capturing high-quality audio and the difficulties associated with acquiring training data. We hope to promote assessment literacy by providing the necessary guidance for users to critically engage with automated speaking assessment, pose the right questions to test developers, and ultimately make informed decisions regarding the fitness for purpose of automated assessment solutions for their specific learning and assessment contexts. [en_US]
dc.language.iso: en [en_US]
dc.publisher: Springer [en_US]
dc.relation.url: https://www.springer.com/gp/book/9789813342316 [en_US]
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: speaking [en_US]
dc.subject: language assessment [en_US]
dc.subject: learning technology [en_US]
dc.subject: Subject Categories::X162 Teaching English as a Foreign Language (TEFL) [en_US]
dc.title: Opening the black box: exploring automated speaking evaluation [en_US]
dc.title.alternative: Issues in Language Testing Around the World: Insights for Language Test Users. [en_US]
dc.type: Book chapter [en_US]
dc.date.updated: 2020-11-12T10:09:53Z
dc.description.note: Archiving of the author's accepted manuscript (AAM) is permitted with a 24-month embargo, per https://www.springer.com/gp/open-access/publication-policies/self-archiving-policy


Files in this item

Name: Challenges+25+Khabbazbashi+Xu+ ...
Embargo: 2023-02-10
Size: 318.6Kb
Format: PDF
Description: author's accepted version

This item appears in the following Collection(s)


Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International.