ℹ️ Short Bio

Hi, I’m Yuanzhe, a second-year master’s student at the University of California, San Diego 🔱.

I have the great honor of collaborating with Prof. Yaoqing Yang (CS@Dartmouth College), Prof. Julian McAuley (CSE@UC San Diego), Prof. Zhiting Hu (HDSI@UC San Diego), Dr. Pu Ren (AI & Learning Systems@LBNL), Yu Wang (Memory Team@xAI), and Zexue He (HAI@Stanford).

My current research focuses on:

  • Understanding the mechanisms, training dynamics, and generalization of LLMs and SciML models via mathematical analysis. Building on this theoretical foundation, we design advanced optimization algorithms that make LLM compression and training more efficient. Previous works include FARMS (ICML 2025) for layer-wise pruning, Model Balancing (EMNLP 2024 Oral) for low-resource fine-tuning, and an ongoing comprehensive analysis of multi-regime dynamics in SciML models in preparation for ICML 2026.

  • Enhancing Memory and Reasoning in LLMs and Agents, specifically by enabling models to process long-term history and achieve advanced reasoning capabilities through post-training. Previous works include M+ (ICML 2025) for long-term information retention, K2-Think (Tech Report) for large-scale reasoning, MIRIX (open-source framework, 3K+ 🌟) for multi-agent memory systems, MemoryAgentBench (Under Review, 190+ 🌟) for comprehensive evaluation, and Mem-alpha (Under Review) for RL-based memory management.

My research leverages mathematical insights into LLMs to develop efficient algorithms, while unlocking advanced memory and reasoning capabilities in LLMs and agents. Moving forward, I aim to empower LLM agents with superior memory and reasoning by innovating across efficient optimization methods, effective frameworks, underlying mechanisms, and interactive environment design.

I am actively seeking Fall 2026 PhD positions, an industrial research internship after my M.S. graduation (about six months), and research collaboration opportunities. Feel free to reach out!

📖 Education

University of California, San Diego (UCSD)
M.S. in Computer Science and Engineering
2024.09 - 2026.03 (Expected)
Huazhong University of Science and Technology (HUST)
B.S. in Artificial Intelligence, Innovation Experimental Honor Class, Qiming School
GPA: 3.91/4.0
2020.09 - 2024.06

⚙️ Research Projects

📖 Mathematical Analysis on LLMs and SciML Models

# denotes equal contribution

ICML 2025

[1] Eigenspectrum Analysis of Neural Networks without Aspect Ratio Bias

Yuanzhe Hu, Kinshuk Goel, Vlad Killiakov, Yaoqing Yang

ICML 2025

Short Summary: A layer-wise LLM pruning method inspired by the Marchenko–Pastur (MP) law (see the background note below).

Paper | Video | Review

Star Count
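For background, the MP law referenced above describes the limiting eigenvalue density of a random weight matrix's correlation matrix, and its support depends explicitly on the matrix's aspect ratio Q. The standard statement below is general background, not FARMS's estimator; it shows why raw eigenspectrum statistics shift with layer shape, which is the aspect-ratio bias the paper corrects.

```latex
% Marchenko–Pastur law: X is an m x n random matrix with i.i.d.
% entries of variance \sigma^2 and aspect ratio Q = n/m >= 1.
% The eigenvalues \lambda of (1/n) X X^\top have limiting density
\rho(\lambda) = \frac{Q}{2\pi\sigma^{2}}
  \frac{\sqrt{(\lambda_{+}-\lambda)(\lambda-\lambda_{-})}}{\lambda},
\qquad
\lambda_{\pm} = \sigma^{2}\left(1 \pm \sqrt{1/Q}\right)^{2}.
```

Because the bulk edges λ± move with Q, layers of different shapes cannot be compared by raw spectral statistics alone.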

EMNLP 2024

[2] Model Balancing Helps Low-data Training and Fine-tuning

Zihang Liu#, Yuanzhe Hu#, Tianyu Pang, Yefan Zhou, Pu Ren, Yaoqing Yang

EMNLP 2024, Oral (168/6105 = 2.75%), Meta Review OA = 5.0

Short Summary: A learning rate scheduler for LLM fine-tuning on low-resource datasets (a generic sketch of the idea follows below).

Paper | Video | Review

Star Count
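To make the idea concrete, here is a minimal, hypothetical sketch of spectral layer-wise learning-rate balancing: estimate how heavy-tailed each layer's eigenspectrum is, then scale that layer's learning rate relative to the network average. The function names, the Hill-estimator variant, and the linear scaling rule are illustrative assumptions, not the paper's algorithm.

```python
import torch

def hill_alpha(eigs: torch.Tensor, k_frac: float = 0.1) -> float:
    # Hill estimator of the power-law tail exponent of an empirical
    # spectral density (smaller alpha = heavier tail).
    eigs = torch.sort(eigs, descending=True).values
    k = max(int(len(eigs) * k_frac), 2)
    tail = eigs[:k]
    return 1.0 + k / (torch.log(tail / tail[-1]).sum().item() + 1e-12)

def layerwise_lrs(model: torch.nn.Module, base_lr: float, strength: float = 0.5):
    # Hypothetical rule: scale each 2-D weight matrix's learning rate by
    # its tail exponent relative to the network average, so heavier-tailed
    # (typically better-trained) layers take smaller steps.
    alphas = {}
    for name, p in model.named_parameters():
        if p.ndim == 2 and min(p.shape) > 1:
            eigs = torch.linalg.svdvals(p.detach().float()) ** 2
            alphas[name] = hill_alpha(eigs)
    mean_alpha = sum(alphas.values()) / len(alphas)
    return {n: base_lr * (1.0 + strength * (a / mean_alpha - 1.0))
            for n, a in alphas.items()}

# Usage: build per-parameter optimizer groups from the schedule.
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 8))
lrs = layerwise_lrs(model, base_lr=1e-3)
groups = [{"params": [p], "lr": lrs.get(n, 1e-3)}
          for n, p in model.named_parameters()]
optimizer = torch.optim.AdamW(groups)
```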

In Preparation

[3] Multi-Regime Patterns in SciML Models: How Optimization Methods Address Distinct Failure Modes

Yuanzhe Hu (Co-First Author)

Manuscript in preparation (Targeting ICML 2026)

🤔 Enhancing Memory and Reasoning in LLMs and Agents

MIRIX

MIRIX: Multi-Agent Memory System for LLM-Based Agents

My Contribution: Designed MIRIX’s evaluation framework, plus project maintenance and bug fixing.

Open-Source Project, 3K+ 🌟

Website Star Count Fork Count

Under Review

[4] Evaluating Memory in LLM Agents via Incremental Multi-Turn Interactions

Yuanzhe Hu#, Yu Wang#, Julian McAuley

Under Review (scores: 8/6/6/4) / ICML 2025 LCFM Workshop

Short Summary: MemoryAgentBench is a new benchmark for comprehensively evaluating the memory of LLM agents through incremental multi-turn interactions.

Paper

Star Count HF Dataset Dataset Downloads

ICML 2025

[5] M+: Extending MemoryLLM with Scalable Long-Term Memory

Yu Wang, Dmitry Krotov, Yuanzhe Hu, Yifan Gao, Wangchunshu Zhou, Julian McAuley, Dan Gutfreund, Rogerio Feris, Zexue He

ICML 2025

Short Summary: M+ enhances long-term information retention in LLMs by integrating a retriever-based long-term memory mechanism (a generic sketch of such a mechanism follows below).

Paper | Review

Star Count Model Model Downloads 机器之心
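For intuition, the sketch below shows the generic retrieve-from-long-term-memory pattern: past information is stored as (key vector, payload) pairs and the most similar entries are recalled for each query. The class and its methods are hypothetical illustrations, not M+'s architecture.

```python
import torch
import torch.nn.functional as F

class LongTermMemory:
    # Hypothetical sketch of a retriever-based long-term memory:
    # past information is stored as (key vector, payload) pairs and
    # the top-k most similar entries are recalled per query.
    def __init__(self, dim: int):
        self.keys = torch.empty(0, dim)   # one row per stored memory
        self.payloads = []                # arbitrary associated data

    def write(self, key: torch.Tensor, payload) -> None:
        self.keys = torch.cat([self.keys, key.reshape(1, -1)], dim=0)
        self.payloads.append(payload)

    def retrieve(self, query: torch.Tensor, k: int = 4):
        if not self.payloads:
            return []
        sims = F.cosine_similarity(self.keys, query.reshape(1, -1))
        top = sims.topk(min(k, len(self.payloads))).indices
        return [self.payloads[int(i)] for i in top]

# Usage: store ten fake events, then recall the three most relevant.
mem = LongTermMemory(dim=8)
for i in range(10):
    mem.write(torch.randn(8), f"event {i}")
print(mem.retrieve(torch.randn(8), k=3))
```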

Under Review

[6] Mem-$\alpha$: Learning Memory Construction via Reinforcement Learning

Yu Wang, Ryuichi Takanobu, Zhiqi Liang, Yuzhen Mao, Yuanzhe Hu, Julian McAuley, Xiaojian Wu

Under Review

Short Summary: Mem-alpha, a reinforcement learning framework, enhances memory management in LLMs through interaction and feedback.

Paper

Star Count 量子位 机器之心

Tech Report

[7] K2-Think: A Parameter-Efficient Reasoning System

Zhoujun Cheng, Richard Fan, Shibo Hao, Taylor W. Killian, Haonan Li, Suqi Sun, Hector Ren, Alexander Moreno, Daqian Zhang, Tianjun Zhong, Yuxin Xiong, Yuanzhe Hu, Yutao Xie, Xudong Han, Yuqi Wang, Varad Pimpalkhute, Yonghao Zhuang, Aaryamonvikram Singh, Xuezhi Liang, Anze Xie, Jianshu She, Desai Fan, Chengqian Gao, Liqun Ma, Mikhail Yurochkin, John Maggs, Xuezhe Ma, Guowei He, Zhiting Hu, Zhengzhong Liu, Eric P. Xing

MBZUAI IFM / LLM360 Tech Report

Short Summary: K2-Think is a parameter-efficient reasoning system built on a 32B-parameter model.

Paper

Model Model Downloads NY Times Forbes

🔥 News

  • 2025.09:  😁 Excited to share our recent tech report “K2-Think: A Parameter-Efficient Reasoning System”!
  • 2025.07:  😁 We open-sourced MemoryAgentBench. Thanks to Yu Wang for the great help!
  • 2025.05:  🎉🎉 Two papers are accepted to ICML 2025 as posters! See you in Vancouver.
  • 2024.09:  🎉🎉 Excited to share that our work “Model Balancing Helps Low-data Training and Fine-tuning” was accepted to EMNLP 2024 as an oral presentation!
  • 2024.06:  😁 I graduated from HUST!
  • 2024.06:  😄 I created my account on OpenReview!

Last Update: 11/2025