Innovative research in language assessment is the foundation of the Duolingo English Test. We combine research and technology to make English testing more reliable, fair, and secure. To support the next generation of researchers in language assessment, we're thrilled to announce the winners of our first Doctoral Awards Program!

Through these dissertation awards, the Duolingo English Test supports doctoral students who are pushing the boundaries of language testing research. This year, we awarded grants to twelve PhD students whose work promotes better language teaching, learning, and assessment.

Read on to learn more about our 2020 grant recipients!

Jorge Luis Beltrán Zuniga

Teachers College, Columbia University
Assessing L2 Spoken Argumentation: Exploring the Role of Assistance in Learning-Oriented Assessment of Key Competencies

Jorge’s work examines whether scenario-based testing provides an effective measure of a test taker’s competency in oral argumentation, that is, building an argument from evidence and then presenting and defending it in discussion with others. Jorge hopes to shed light on how L2 learners engage with the complex processes involved in this important context of language use.

Shi Chen

English Department, Northern Arizona University
Development and Validation of a Web-based L2 Pragmatic Speaking Test in the University Context

Responding to concerns about the practicality and validity of L2 pragmatic speaking assessment, Shi has designed a test for English as a Second Language (ESL) students that measures pragmatic competence in spoken interaction. By balancing construct representation and practicality, Shi aims to explore the possibility of administering L2 pragmatic speaking tests online. The study also highlights how the analytical principles and findings of conversation analysis informed the test’s development.

Phuc Diem Le

Faculty of Humanities and Social Sciences, The University of Queensland
Rating behaviour and rater cognition: The case of an English speaking test in Vietnam

Phuc’s work investigates the rating behavior of raters from similar backgrounds and its potential link with their cognition. By focusing on raters who share a background, Phuc aims to determine whether background variables can explain all of the variation in their ratings, and to examine the impact of cognition.

Wenjun Ding

School of Education, University of Bristol
Exploring cognitive validity of computerized causal explanation speaking tasks for young EFL learners in China: An eye-tracking study

Wenjun’s work develops computerized causal explanation speaking tasks: an innovative type of picture-based speaking task that prompts young EFL test takers to explain cause and effect. Wenjun uses a mixed-methods design combining eye tracking and stimulated recalls to explore the cognitive validity of this task type.

Melissa Hunte

Ontario Institute for Studies in Education, University of Toronto
Examination of the Relationship Between Preliterate Children’s Oral Language Performance and Developmental Dyslexia using NLP

Melissa seeks to address a gap in understanding of how dyslexia relates to language deficiencies. By examining the acoustic and linguistic dimensions of preliterate children’s speech, her work expands research into the use of Natural Language Processing (NLP) to analyze speech patterns in children with developmental dyslexia.

Hyunah Kim

Ontario Institute for Studies in Education, University of Toronto
Investigating differential item functioning due to cultural familiarity on reading comprehension tests

Hyunah’s work investigates the extent to which items in a provincial reading test function differently across student groups with different levels of familiarity with mainstream Canadian culture. She hopes this research will help ensure fair score use and interpretation of language tests for students from diverse cultural backgrounds, such as immigrants, refugees, and international students.
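Differential item functioning (DIF) can be illustrated with a small worked example. The sketch below is purely hypothetical and uses made-up counts (it is not Hyunah's method or data); it shows the Mantel-Haenszel procedure, one common way to flag DIF: test takers are first matched on overall ability (here, total-score strata), and the odds of answering one item correctly are then compared between the reference and focal groups within each stratum.

```python
def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio for a single test item.

    Each stratum (a band of matched total scores) is a 2x2 table:
    (ref_correct, ref_incorrect, focal_correct, focal_incorrect).
    A value near 1.0 suggests no DIF; a value far from 1.0 suggests
    the item favors one group even after matching on ability.
    """
    numerator = denominator = 0.0
    for ref_c, ref_i, foc_c, foc_i in strata:
        n = ref_c + ref_i + foc_c + foc_i
        numerator += ref_c * foc_i / n
        denominator += ref_i * foc_c / n
    return numerator / denominator

# Hypothetical counts for one item, stratified by total score:
no_dif = [(20, 10, 10, 5), (40, 20, 20, 10)]   # same odds in both groups
dif = [(30, 10, 10, 10), (60, 20, 20, 20)]     # item favors the reference group

print(mh_odds_ratio(no_dif))  # 1.0
print(mh_odds_ratio(dif))     # 3.0
```

In operational testing, the resulting statistic is compared against established thresholds (such as the ETS delta-scale categories) before an item is flagged; this sketch shows only the core computation.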

Santi Lestari

Department of Linguistics and English Language, Lancaster University
Operationalizations of reading-into-writing into rating tools

Santi seeks to gain further insight into human rating of reading-into-writing tasks, to aid efforts in developing automated scoring systems for this innovative task type. By examining rater reliability, consistency, and cognition, her work investigates how the reading-into-writing construct is operationalized in rating scales.

Sha Liu

School of Education, University of Bristol
Exploring L2 learners' engagement with automated feedback: An eye tracking study

Sha’s work pilots the combination of eye tracking and stimulated recall to explore how Chinese EFL learners engage with Automated Writing Evaluation feedback while revising essays. Her study examines learners’ attention to the feedback, the cognitive effort they expend on it, and their revision responses to it, as well as the underlying factors that may affect their engagement.

Chaina Santos Oliveira

Centro de Informática, Universidade Federal de Pernambuco
Item Response Theory Model to Evaluate Speech Synthesis and Recognition

Chaina’s work proposes a new methodology of speech synthesis and recognition evaluation that applies Item Response Theory (IRT) from psychometrics to evaluate speech synthesizers, automatic speech recognition (ASR) systems, and sentences simultaneously. She hopes that this methodology can be applied to evaluate the listening and speaking abilities of Duolingo English Test takers.
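The core idea of an IRT approach can be sketched with the simplest such model, the one-parameter (Rasch) model. In the setup Chaina describes, a synthesizer or ASR system plays the role of an examinee with an ability parameter, and each sentence plays the role of an item with a difficulty parameter, both estimated from the same response matrix. The snippet below is a minimal illustration of that idea with invented numbers, not her actual model or code.

```python
import math

def rasch_p(theta, b):
    """Rasch (1PL) model: probability that a system with ability
    theta handles a sentence with difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical abilities for two ASR systems and difficulties for
# three sentences, all on the same logit scale:
systems = {"asr_a": 1.0, "asr_b": -0.5}
sentences = {"easy": -1.5, "medium": 0.0, "hard": 2.0}

for name, theta in systems.items():
    probs = {s: round(rasch_p(theta, b), 2) for s, b in sentences.items()}
    print(name, probs)
```

Because systems and sentences sit on a single scale, the same fitted parameters simultaneously rank the systems and identify which sentences are hardest, which is what lets one evaluation run cover synthesizers, recognizers, and test material at once.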

Yuna Patricia Seong

Teachers College, Columbia University
Using a Scenario-Based Assessment Approach to Examine the Cognitive Dimension of Second Language Academic Speaking Ability through the Assessment of an Integrated Academic Speaking Competency

Yuna’s work examines the role of strategic competence, an integral part of L2 ability because of its direct influence on performance. Using an online, scenario-based academic speaking test, Yuna explores the cognitive dimension of L2 academic speaking ability by assessing test takers’ performance and examining the relationships between cognitive and metacognitive strategy use and speaking performance.

Ji-young Shin

Department of English, Purdue University
Investigating Elicited Imitation in Measuring Linguistically Diverse Examinees’ L2 Processing Competence: Item Response Theory and Random Forest Approaches

Because little is known about what influences elicited imitation (EI) item discrimination in measuring accuracy and fluency, Ji-young examines EI scored for complexity, accuracy, and fluency (CAF) as a measure of processing competence. She focuses on how scoring methods and the linguistic features of prompts affect EI item measurement qualities, as well as on the relationships among CAF features. She hopes this work will promote fine-tuned test development and L2 proficiency research.

Leila Zohali

School of Languages and Linguistics, The University of Melbourne
Investigating the Pedagogical Usefulness of Automated Writing Evaluation (AWE) Systems in Academic Writing Instruction

Leila’s work investigates the efficacy of incorporating automated written corrective feedback into language classes. Drawing on the automated feedback itself, as well as teachers’ and students’ perceptions of it, she examines whether language proficiency affects students’ level of engagement and accuracy improvement.

We are so inspired by the work these scholars are pursuing, and we can’t wait to see how their projects develop! To learn more about our winners’ research, be sure to follow us on social media (Facebook, Instagram, and Twitter) and check out our research page. Congratulations to our 2020 recipients!