The Duolingo English Test has revolutionized the world of high-stakes assessment by combining research and technology to make a fair, reliable test that you can take securely, on demand, anywhere in the world, for only $49 USD.
Because innovation in language assessment drives our work, we are committed to supporting the next generation of researchers in this field. We’re thrilled to announce the winners of our second annual Doctoral Awards Program! (See last year’s winners here.)
Each of these fourteen PhD students is working to promote better language teaching, learning, and assessment through innovative research:
- John Dylan Burton
- Aritra Ghosh
- Lu Han
- Aynur Ismayilli Karakoç
- Xiaomeng Li
- Wenyue Ma
- Xuan Minh Ngo
- Giulia Peri
- Bruce Russell
- Yu Tian
- Xingcheng Wang
- Johanathan Woodworth
- Sharon Sin Ying Wong
- Soohye Yeom
Read on to learn more about our 2021 winners!
A quick note: Much of the research here references L2 learning and assessments. L2 refers to a person’s second language, or what we at Duolingo call “learning language.”
John Dylan Burton
Michigan State University
Exploring the influence of nonverbal behavior on language proficiency scores
Dylan’s research investigates how nonverbal behaviors, in particular eye gaze and facial expressions, affect how linguistic performance is evaluated in online language proficiency exams. Using machine learning technology and raters’ language guidelines, he hopes to better understand the impact of nonverbal behaviors on raters as they assess L2 ability.
Aritra Ghosh
University of Massachusetts Amherst
Beyond informativeness metrics in computerized adaptive testing
Computerized adaptive tests create personalized exams that are considerably shorter than traditional standardized tests by selecting the next most informative question for each test taker based on their previous responses. Currently, some of the question-selection algorithms for these adaptive tests are static, meaning they cannot improve by learning from response data. Aritra proposes a new, trainable selection algorithm that has the potential to make adaptive tests even more fair and accurate for test takers.
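For curious readers: a common static selection rule (one kind Aritra’s work aims to go beyond) picks the unused item with the greatest Fisher information at the test taker’s current ability estimate. Here is a minimal sketch under a two-parameter IRT model; the item bank values and variable names are illustrative, not from any real test.

```python
import math

def prob_correct(theta, a, b):
    # 2PL IRT model: probability of a correct response given ability
    # theta, item discrimination a, and item difficulty b
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    # How much one item tells us about an ability estimate theta
    p = prob_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def pick_next_item(theta, items, administered):
    # Static maximum-information rule: among items not yet given,
    # choose the one most informative at the current estimate
    candidates = [i for i in range(len(items)) if i not in administered]
    return max(candidates, key=lambda i: fisher_information(theta, *items[i]))

# Illustrative item bank of (discrimination, difficulty) pairs
bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]
theta_hat = 0.0   # current ability estimate
nxt = pick_next_item(theta_hat, bank, administered=set())
```

Because the rule is a fixed formula, it cannot adapt as response data accumulates; a trainable selection policy, by contrast, could learn better choices from that data.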
Lu Han
Temple University
Using authenticated spoken texts to assess Chinese L2 listening
Listening tasks are a key component of language proficiency exams, but the listening texts used in large-scale Chinese language tests often do not resemble real-world spoken language. Lu seeks to gather evidence to create better listening tasks by infusing standard dialogues with important spoken language features. She hopes her work will provide teachers and test developers with more insights on the particular challenges L2 Chinese listeners encounter in the real world.
Aynur Ismayilli Karakoç
Victoria University of Wellington
Designing and validating an integrated reading-writing test for first-year students at a New Zealand university
Integrated writing (IW) test tasks are designed to assess complex abilities that students will use in academic settings, such as selecting, synthesizing, and summarizing information from source material. Aynur seeks to develop an IW test task and scoring rubric that are more representative of real-world academic tasks. She hopes this research will inform how early assessment programs prepare students for mainstream courses.
Xiaomeng Li
Carnegie Mellon University
The effects of L1 writing system and L2 linguistic knowledge on L2 word recognition development: A construct-driven assessment approach
Xiaomeng aims to better understand the sources of difficulty in learning to read in a second language, specifically by examining L2 word recognition. This work sheds light on how our brains process written information. Xiaomeng hopes her findings will support L2 instructors and learners as they disentangle the source of difficulties at different learning stages.
Wenyue Ma
Michigan State University
An investigation into a Chinese placement test’s score interpretations and uses
Wenyue’s work examines listening and reading test items on college-level Chinese placement tests. By identifying problematic items and providing an overall evaluation of the intended interpretation and use of test scores, Wenyue’s research contributes to ongoing validity research on these tests.
Xuan Minh Ngo
The University of Queensland
A sociocultural analysis of teacher assessment literacy development: A narrative inquiry into novice Vietnamese EFL teachers’ assessment perezhivanie
Minh takes a sociocultural lens to examine the effect of social environment and personal relationships on new English as a foreign language (EFL) instructors as they develop something called teacher assessment literacy (TAL) — that is, the ability to use assessments to evaluate and support student learning. In highlighting the often-overlooked emotional dimension of developing TAL, Minh hopes to inform how policy makers, institutions, and educators promote improved assessment literacy.
Giulia Peri
University of Foreigners of Siena
Topical knowledge in L2 Italian speaking performance: A scenario-based language assessment test for L2 Italian
Giulia’s research lies at the intersection of applied linguistics and language learning and teaching. She seeks to analyze the relationship between L2 Italian students’ topical knowledge and their speaking performance on academic tests. Her work can help researchers better understand the role topical knowledge plays in L2 Italian tests and provide feedback on how these tests are designed.
Bruce Russell
University of Toronto
Investigating support for international students
Bruce’s research examines how English as a second language students’ performance in language support programs and language proficiency exams relates to their future academic performance at university. He hopes that a deeper understanding of the relationship between test scores and academic performance will help students better prepare for the admissions process, and inform how early intervention programs are designed to support students.
Yu Tian
Georgia State University
Argumentation in L1 and L2 adult writers: A process-based perspective
Constructing robust arguments is an important marker of academic writing ability, which is why many high-stakes standardized tests feature a written component. Yu’s research investigates links between keystroke logs, the production of arguments, and writing quality. His findings will shed light on the cognitive activities underlying adult writers' argument development, and could inform automatic essay scoring systems.
Xingcheng Wang
University of Melbourne
The development and validation of an automated dialogue system for assessing English L2 interactional competence
Most people communicate through text-based interactions (like instant messaging), yet there is very little research on using these interactions to evaluate L2 learners’ language proficiency and interactional competence (IC). Xingcheng has developed an automated dialogue system to elicit and measure IC in these settings, in an effort to promote enhanced test practicality and standardization.
Johanathan Woodworth
York University
Hybrid feedback: The efficacy of combining automated writing corrective feedback (AWCF) and teacher feedback for writing development
Experts agree that students need practice and feedback to develop their writing skills, but teachers don’t always have time to comment on multiple drafts. Automated writing evaluation (AWE) systems can help by providing immediate feedback on everything from grammar to idea development, but they may not always be accurate. Johanathan’s work evaluates how hybrid feedback — a combination of instructor and AWE feedback — can mitigate these issues. His research contributes to a wider understanding of the effectiveness of this hybrid approach.
Sharon Sin Ying Wong
University of Bristol
Role of visuals in listening assessment: An eye-tracking study on L2 test takers’ cognitive processes
With the development of testing technology, an increasing number of listening tests are using pictures and other visual elements as part of listening input. To complete these tasks, test takers must make sense of the visual representation while listening to the recording, even though this isn’t the skill that these tasks are designed to assess. Sharon’s research examines how visual-spatial skills affect test takers’ performance on these test items to help inform and improve the design of international language proficiency tests.
Soohye Yeom
New York University
Using international English proficiency tests in EMI contexts: A comparison of tasks and student performance on TOEFL iBT writing tasks and course assignments in Korean universities
As English-medium instruction (EMI) becomes more common in higher education institutions throughout East Asia, many universities in the region are making admissions decisions based on international English proficiency tests that were not originally designed for this context. Soohye’s research investigates whether an international English proficiency test can reflect relevant skills and abilities in university EMI courses. Her findings will contribute to research exploring the validity of using international English proficiency tests for making decisions in EMI settings.
We can’t wait to see how these projects develop! Be sure to follow us on social media (Facebook, Instagram, and Twitter) and check out our research page!