Assistant Professor
Department of Electrical and Computer Engineering
Department of Computer Science and Engineering (Cooperating)
Department of Statistics (Cooperating)
University of California, Riverside
Google Scholar · GitHub
Email: yzhu@ucr.edu
Office: Winston Chung Hall, Room 431
I am an assistant professor in the ECE, CSE, and Statistics departments at UC Riverside. I received my Ph.D. in CS from the University of Wisconsin–Madison in 2023, where I was advised by Robert Nowak. I also spent time as a research intern at Microsoft Research NYC, where I was mentored by Paul Mineiro, Dylan Foster, and John Langford.
My research focuses on machine learning, reinforcement learning, and foundation models (e.g., LLMs), with an emphasis on developing efficient and reliable learning algorithms and systems for large-scale, multimodal problems. I am particularly interested in exploring the following research areas:
LLM-powered learning and decision making agents
Test-time scaling and alignment
Multimodal learning and data selection
Efficient algorithms for training and inference
\(\bigstar\) Prospective Students. I am looking for Ph.D. students and visitors/interns who are broadly interested in ML/RL/LLMs. If you are interested in joining my lab, please (i) fill out this form and (ii) drop me an email with your CV and transcripts.
Jiancheng Zhang (Fall 2024 – present), Bowen Zuo (Fall 2024 – present)
Strategic Scaling of Test-Time Compute: A Bandit Learning Approach
Bowen Zuo and Yinglun Zhu
Chain-of-Region: Visual Language Models Need Details for Diagram Analysis
Xue Li, Yiyou Sun, Wei Cheng, Yinglun Zhu, and Haifeng Chen
International Conference on Learning Representations (ICLR) 2025
Efficient Sparse PCA via Block-Diagonalization
Alberto Del Pia, Dekun Zhou, and Yinglun Zhu
International Conference on Learning Representations (ICLR) 2025
Efficient Sequential Decision Making with Large Language Models
Dingyang Chen, Qi Zhang, and Yinglun Zhu
Conference on Empirical Methods in Natural Language Processing (EMNLP) 2024
An Experimental Design Framework for Label-Efficient Supervised Finetuning of Large Language Models
Gantavya Bhatt, Yifang Chen, Arnav M. Das, Jifan Zhang, Sang T. Truong, Stephen Mussmann, Yinglun Zhu, Jeffrey Bilmes, Simon S. Du, Kevin Jamieson, Jordan T. Ash, and Robert Nowak
Findings of the Association for Computational Linguistics (ACL Findings) 2024
LabelBench: A Comprehensive Framework for Benchmarking Adaptive Label-Efficient Learning
Jifan Zhang, Yifang Chen, Gregory Canal, Arnav Das, Gantavya Bhatt, Stephen Mussmann, Yinglun Zhu, Jeffrey Bilmes, Simon Du, Kevin Jamieson, and Robert Nowak
Journal of Data-Centric Machine Learning Research 2024
\(\bigstar\) Also selected for a Poster Award at Midwest ML Symposium 2024
Infinite Action Contextual Bandits with Reusable Data Exhaust
Mark Rucker, Yinglun Zhu, and Paul Mineiro
International Conference on Machine Learning (ICML) 2023
Active Learning with Neural Networks: Insights from Nonparametric Statistics
Yinglun Zhu and Robert Nowak
Conference on Neural Information Processing Systems (NeurIPS) 2022
Efficient Active Learning with Abstention
Yinglun Zhu and Robert Nowak
Conference on Neural Information Processing Systems (NeurIPS) 2022
Contextual Bandits with Large Action Spaces: Made Practical
Yinglun Zhu, Dylan Foster, John Langford, and Paul Mineiro
International Conference on Machine Learning (ICML) 2022, [Code]
\(\bigstar\) Now available in the leading machine learning library Vowpal Wabbit (see here for instructions) and commercially incorporated into Microsoft Azure Personalizer!
Contextual Bandits with Smooth Regret: Computational Efficiency in Continuous Action Spaces
Yinglun Zhu and Paul Mineiro
International Conference on Machine Learning (ICML) 2022, [Code]
\(\bigstar\) Selected for a full oral presentation (top 2.1%)
Near Instance Optimal Model Selection for Pure Exploration Linear Bandits
Yinglun Zhu, Julian Katz-Samuels, and Robert Nowak
International Conference on Artificial Intelligence and Statistics (AISTATS) 2022, [Code]
Pareto Optimal Model Selection in Linear Bandits
Yinglun Zhu and Robert Nowak
International Conference on Artificial Intelligence and Statistics (AISTATS) 2022, [Code]
Pure Exploration in Kernel and Neural Bandits
Yinglun Zhu\(^\star\), Dongruo Zhou\(^\star\), Ruoxi Jiang\(^\star\), Quanquan Gu, Rebecca Willett, and Robert Nowak
Conference on Neural Information Processing Systems (NeurIPS) 2021, [Code]
On Regret with Multiple Best Arms
Yinglun Zhu and Robert Nowak
Conference on Neural Information Processing Systems (NeurIPS) 2020, [Code]
Robust Outlier Arm Identification
Yinglun Zhu, Sumeet Katariya, and Robert Nowak
International Conference on Machine Learning (ICML) 2020, [Code]
Spring 2025: EE 260 Large Models and Advances in AI
Winter 2025: EE 114 Probability, Random Variables, and Random Processes in Electrical Engineering
Fall 2024: EE/CS 228 Introduction to Deep Learning
Spring 2024: EE 260 Large Models and Advances in AI
Fall 2023: EE/CS 228 Introduction to Deep Learning