
👋 Hi!

I am an incoming HCI PhD student at the University of Toronto's DGP Lab, advised by Prof. Tovi Grossman.

Currently, I am working on generative and adaptive UIs to rethink how people use computers to learn, work, and create. Many exciting projects are on the way, so stay tuned!

Previously, I worked on extended reality systems for learning and task guidance, as well as novel gaze-based interactions for pointing and accessibility. I received my MSc in Computer Science from the University of Toronto and my BEng in Computer Science from Tsinghua University. I have also worked as a research assistant with the Tsinghua Pervasive HCI Group and the Stanford HCI Group.

Publications


CHI 2025 Paper

MaRginalia: Enabling In-person Lecture Capturing and Note-taking Through Mixed Reality

Leping Qiu, Erin Seongyoon Kim, Sangho Suh, Ludwig Sidenmark, Tovi Grossman


IEEE VR 2024 Paper

AMMA: Adaptive Multimodal Assistants Through Automated State Tracking and User Model-Directed Guidance Planning

Jackie (Junrui) Yang, Leping Qiu, Emmanuel Angel Corona-Moreno, Louisa Shi, Hung Bui, Monica S. Lam, James A. Landay


UIST 2022 Paper

DEEP: 3D Gaze Pointing in Virtual Reality Leveraging Eyelid Movement

Xin Yi, Leping Qiu, Wenjing Tang, Yehan Fan, Hewu Li, Yuanchun Shi


UIST 2022 Poster

One-Dimensional Eye-Gaze Typing Interface for People with Locked-in Syndrome

Michael Cross, Leping Qiu, Mingyuan Zhong, Yuntao Wang, Yuanchun Shi