    • The English Benchmarking Study in Maltese Schools: Technical Report 2015

      Khabbazbashi, Nahal; Khalifa, Hanan; Robinson, M.; Ellis, S.; Cambridge English Language Assessment (Cambridge English Language Assessment, 2016-04-15)
      This is a report for a project between Cambridge English Language Assessment and the Maltese Ministry for Education and Employment [Nahal Khabbazbashi was principal investigator for the project].
    • Exploring the use of video-conferencing technology in the assessment of spoken language: a mixed-methods study

      Nakatsuhara, Fumiyo; Inoue, Chihiro; Berry, Vivien; Galaczi, Evelina D.; University of Bedfordshire; British Council; Cambridge English Language Assessment (Taylor & Francis, 2017-02-10)
      This research explores how internet-based video-conferencing technology can be used to deliver and conduct a speaking test, and what similarities and differences can be discerned between the standard and computer-mediated face-to-face modes. The context of the study is a high-stakes speaking test, and the motivation for the research is the need for test providers to keep under constant review the extent to which their tests are accessible and fair to a wide constituency of test takers. The study examines test-takers’ scores and linguistic output, and examiners’ test administration and rating behaviors across the two modes. A convergent parallel mixed-methods research design was used, analyzing test-takers’ scores and language functions elicited, examiners’ written comments, feedback questionnaires and verbal reports, as well as observation notes taken by researchers. While the two delivery modes generated similar test score outcomes, some differences were observed in test-takers’ functional output and the behavior of examiners who served as both raters and interlocutors.
    • International assessment and local contexts: a case study of an English language initiative in higher education institutes in Egypt

      Khalifa, Hanan; Khabbazbashi, Nahal; Abdelsalam, Samar; Said, Mohsen Elmahdy; Cambridge English Language Assessment; Cairo University (Association for Language Testing and Assessment of Australia and New Zealand, 2015-11-07)
      In line with the long-term objectives of English language reform in higher education (HE) institutes across Egypt and of increasing employability in the global job market, the Center for Advancement of Postgraduate Studies and Research in Cairo University (CAPSCU), Cambridge English Language Assessment and the British Council (Egypt) have implemented a multi-phase upskilling program. The program aims to enhance the workplace language skills of socially disadvantaged undergraduates; develop teachers’ pedagogical knowledge and its application; provide both students and teachers with a competitive edge in the job market through internationally recognised certification and the introduction of 21st century skills such as digital-age literacy and effective communication in HE; and, lastly, integrate international standards for teaching, learning and assessment within the local context. This paper reports on a mixed methods research study aimed at evaluating the effectiveness of this initiative and its impact at the micro and macro levels. The research focused on language progression, learner autonomy, motivation towards digital learning and assessment, and improvements in pedagogical knowledge and teaching practices. Standardised assessment, attitudinal and perceptions surveys, and observational data were used. Findings suggested a positive impact of the upskilling program, illustrated how international collaborations can provide the necessary skills for today’s global job market, and highlighted areas for consideration in upscaling the initiative.
    • Rating scale development: a multistage exploratory sequential design

      Galaczi, Evelina D.; Khabbazbashi, Nahal; Cambridge English Language Assessment (Cambridge University Press, 2016-03-01)
      The project chosen to showcase the application of the exploratory sequential design in second/foreign (L2) language assessment comes from the context of rating scale development and focuses on the development of a set of scales for a suite of high-stakes L2 speaking tests. The assessment of speaking requires assigning scores to a speech sample in a systematic fashion by focusing on explicitly defined criteria which describe different levels of performance (Ginther 2013). Rating scales are the instruments used in this evaluation process, and they can be either holistic (i.e. providing a global overall assessment) or analytic (i.e. providing independent evaluations for a number of assessment criteria, e.g. Grammar, Vocabulary, Organisation, etc.). The discussion in this chapter is framed within the context of rating scales in speaking assessment. However, it is worth noting that the principles espoused, stages employed and decisions taken during the development process have wider applicability to performance assessment in general.