Wei Guo 郭纬


🔥 About Me

I am a Machine Learning Ph.D. student at Georgia Institute of Technology, starting in Fall 2023, advised by Professors Yongxin Chen and Molei Tao. Previously, I obtained my Bachelor’s degree in Statistics from the School of Mathematical Sciences, Peking University in 2023, where I was mentored by Professor Cheng Zhang. I was born in Ningbo, Zhejiang Province, P.R. China in 2001, and grew up there until I left for Beijing in 2019.

🔥 CV Please check here for my CV.

🔥 Academic Interests

I am broadly interested in statistics, probability, and machine learning. My current research interests include, but are not limited to:

➡️ Statistics and Probability: Theoretical analysis and practical design of sampling algorithms (Markov chain Monte Carlo, non-equilibrium, or learning-based neural samplers); Applied stochastic analysis with applications to optimal transport, stochastic optimal control, and statistical physics.

➡️ Machine Learning: Generative modeling, with a particular focus on (continuous/discrete) diffusion and flow-based models and with applications to vision, language, and sciences.

🔥 Other Interests

I am also interested in languages (including la langue française (the French language), Chinese vernaculars and minority languages in China, and linguistics), political science, China’s railway system, civil aviation, architecture, and video games (in particular, action games such as The Last of Us, Assassin’s Creed, and the Метро (Metro) series). I am a fan of LE SSERAFIM. Finally, I am a travel enthusiast. Some destinations I have visited and highly recommend for their profound historical and cultural heritage include Hangzhou, Yangzhou, Datong, Macau, Washington, D.C., and my hometown Ningbo.

News

Jan 29, 2026 Two papers Complexity Analysis of Normalizing Constant Estimation: from Jarzynski Equality to Annealed Importance Sampling and beyond and Proximal Diffusion Neural Sampler accepted at ICLR 2026, see you in Rio de Janeiro 🇧🇷!
Oct 9, 2025 Two new papers Enhancing Reasoning for Diffusion LLMs via Distribution Matching Policy Optimization and Proximal Diffusion Neural Sampler are now available on arXiv.
Sep 27, 2025 I’m currently looking for an internship position for summer 2026! If you have any opportunities or suggestions, please feel free to reach out to me by email.

Selected Papers

2026

  1. ICLR 2026
    Complexity Analysis of Normalizing Constant Estimation: from Jarzynski Equality to Annealed Importance Sampling and beyond
    Wei Guo, Molei Tao, and Yongxin Chen
    In The Fourteenth International Conference on Learning Representations, 2026

2025

  1. ICLR 2025
    Provable Benefit of Annealed Langevin Monte Carlo for Non-log-concave Sampling
    Wei Guo, Molei Tao, and Yongxin Chen
    In The Thirteenth International Conference on Learning Representations, 2025
  2. Preprint
    Enhancing Reasoning for Diffusion LLMs via Distribution Matching Policy Optimization
    Yuchen Zhu*, Wei Guo*, Jaemoo Choi, Petr Molodyk, Bo Yuan, Molei Tao, and Yongxin Chen
    arXiv preprint arXiv:2510.08233, 2025
  3. NeurIPS 2025
    MDNS: Masked Diffusion Neural Sampler via Stochastic Optimal Control
    In The Thirty-ninth Annual Conference on Neural Information Processing Systems, 2025