Welcome
About Me
I am a Machine Learning Researcher at Apple/AIML, where I primarily work on Apple Foundation Models, specifically on agent tool use and instruction following.
I earned my Ph.D. in Computer Science from Columbia University, where I worked with Prof. Zhou Yu and was a member of the Natural Language Processing (NLP) Group. My research focused on natural language generation and conversational AI; in particular, I was interested in building task-oriented dialogue systems with limited resources.
Prior to Columbia University, I received my B.S. in Physics from Nanjing University and my M.S. in Electrical and Computer Engineering from Johns Hopkins University, where I worked with Prof. Sanjeev Khudanpur and Prof. Daniel Povey.
For more details, please see my full CV.
News
- Jan 2025: Paper Bottom-Up Synthesis of Knowledge-Grounded Task-Oriented Dialogues with Iteratively Self-Refined Prompts accepted at NAACL 2025.
- Sep 2024: Paper Consent in Crisis: The Rapid Decline of the AI Data Commons accepted at NeurIPS 2024 (Datasets and Benchmarks Track).
- Sep 2024: Paper DECOR: Improving Coherence in L2 English Writing with a Novel Benchmark for Incoherence Detection, Reasoning, and Rewriting accepted at EMNLP 2024.
- Sep 2024: Paper VarBench: Robust Language Model Benchmarking Through Dynamic Variable Perturbation accepted at EMNLP 2024 (Findings).
- Aug 2024: I passed my Ph.D. dissertation defense, Constructing Task-Oriented Dialogue Systems with Limited Resources.
- Jan 2024: Paper DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection for Conversational AI accepted at EACL 2024.
