ℹ️ Short Bio
Hi, I’m Yuanzhe, a master’s student at the University of California, San Diego. I have the great honor of collaborating with Prof. Yaoqing Yang, Prof. Julian McAuley, and Prof. Zhiting Hu, as well as Dr. Ren Pu, Dr. Yu Wang, and Dr. Zexue He.
My current research focuses on:
- Understanding the mechanisms, dynamics, and generalization of LLM pruning and post-training via mathematical analysis.
- Memory and reasoning in LLMs and agents.
I am actively seeking Fall 2026 CS/ECE/DS PhD positions, an industrial research internship after my M.S. graduation (about six months), and collaboration opportunities in agentic learning. Feel free to reach out!
📖 Education
| Institution | Degree | Period |
| --- | --- | --- |
| University of California, San Diego (UCSD) | M.S. in Computer Science and Engineering | 2024.09 - 2026.03 (Expected) |
| Huazhong University of Science and Technology (HUST) | B.S. in Artificial Intelligence, Innovation Experimental Honor Class, Qiming School (GPA: 3.91/4.0) | 2020.09 - 2024.06 |
⚙️ Open Source Project
| Project | My Contribution |
| --- | --- |
| MIRIX: Multi-Agent Memory System for LLM-Based Agents | Designed the evaluation framework for MIRIX; project maintenance and bug fixing. |
📝 Writing Samples
First-Authored
# denotes equal contribution



[3] Evaluating Memory in LLM Agents via Incremental Multi-Turn Interactions
Yuanzhe Hu#, Yu Wang#, Julian McAuley
Under Review (scores: 8/6/6/4) / ICML 2025 LCFM Workshop
Short Summary: MemoryAgentBench is a new benchmark designed to comprehensively evaluate the memory capabilities of LLM agents.
Co-Authored

[4] Mem-$\alpha$: Learning Memory Construction via Reinforcement Learning
Yu Wang, Ryuichi Takanobu, Zhiqi Liang, Yuzhen Mao, Yuanzhe Hu, Julian McAuley, Xiaojian Wu
Under Review
Short Summary: Mem-$\alpha$ is a reinforcement learning framework that enhances memory management in LLMs through interaction and feedback.

[5] K2-Think: A Parameter-Efficient Reasoning System
Zhoujun Cheng, Richard Fan, Shibo Hao, Taylor W. Killian, Haonan Li, Suqi Sun, Hector Ren, Alexander Moreno, Daqian Zhang, Tianjun Zhong, Yuxin Xiong, Yuanzhe Hu, Yutao Xie, Xudong Han, Yuqi Wang, Varad Pimpalkhute, Yonghao Zhuang, Aaryamonvikram Singh, Xuezhi Liang, Anze Xie, Jianshu She, Desai Fan, Chengqian Gao, Liqun Ma, Mikhail Yurochkin, John Maggs, Xuezhe Ma, Guowei He, Zhiting Hu, Zhengzhong Liu, Eric P. Xing
MBZUAI IFM / LLM360 Tech Report
Short Summary: K2-Think is a parameter-efficient reasoning system based on a 32B model.

[6] M+: Extending MemoryLLM with Scalable Long-Term Memory
Yu Wang, Dmitry Krotov, Yuanzhe Hu, Yifan Gao, Wangchunshu Zhou, Julian McAuley, Dan Gutfreund, Rogerio Feris, Zexue He
ICML 2025
Short Summary: M+ enhances long-term information retention in LLMs by integrating a retriever-based long-term memory mechanism.
🔥 News
- 2025.09: 😁 Excited to share our recent work “K2-Think: A Parameter-Efficient Reasoning System”!
- 2025.07: 😁 We open-sourced MemoryAgentBench. Thanks to Yu Wang for the great help!
- 2025.05: 🎉🎉 Two papers were accepted to ICML 2025 as posters! See you in Vancouver.
- 2024.09: 🎉🎉 Excited to share that our work “Model Balancing Helps Low-data Training and Fine-tuning” was accepted to EMNLP 2024 as an oral presentation!
- 2024.06: 😁 I graduated from HUST!
- 2024.06: 😄 I created my account on OpenReview!
Last Update: 11/2025