Hi there 👋 I am Jiaxi Li, a Ph.D. student at the University of Georgia. I obtained my Bachelor’s degree in Computer Science from Shandong University in June 2024.

My research interests lie in Machine Reasoning and Large Language Models.

📝 Publications and Preprints

\* Equal Contribution; $\dagger$ Corresponding Author.

Preprint

Mitigating Hallucination Through Theory-Consistent Symmetric Multimodal Preference Optimization

Wenqi Liu, Xuemeng Song$\dagger$, Jiaxi Li, Yinwei Wei, Na Zheng, Jianhua Yin, Liqiang Nie

TL;DR:
  • Previous multimodal DPO approaches to mitigating hallucination (e.g., mDPO) have been questioned for their non-rigorous optimization objectives and indirect preference supervision.

  • To this end, we propose Symmetric Multimodal Preference Optimization (SymMPO), which introduces symmetric preference learning with direct supervision via response pairs, ensuring rigorous theoretical consistency with standard DPO (whose objective is recalled below for reference).
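
For reference, the standard DPO objective that SymMPO is designed to remain theoretically consistent with is the following; this is background material, not SymMPO's own loss:

$$
\mathcal{L}_{\text{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]
$$

where $\pi_\theta$ is the policy being trained, $\pi_{\text{ref}}$ a frozen reference model, $\beta$ a temperature, and $(y_w, y_l)$ a preferred/dispreferred response pair.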

Preprint

Fact or Guesswork? Evaluating Large Language Models’ Medical Knowledge with Structured One-Hop Judgments

Jiaxi Li, Yiwei Wang, Kai Zhang, Yujun Cai, Bryan Hooi, Nanyun Peng, Kai-Wei Chang, Jin Lu

TL;DR:
  • We introduce the Medical Knowledge Judgment (MKJ) dataset, which converts UMLS knowledge-graph triples into one-hop questions, to directly evaluate the factuality of LLMs’ medical knowledge without confounding reasoning effects (a toy construction is sketched after this list).

  • Experiments reveal that LLMs struggle with accuracy, show poor calibration, and perform worse on rare medical concepts due to long-tail knowledge distribution and co-occurrence bias.

  • Retrieval-augmented generation significantly improves factual accuracy and reduces uncertainty, highlighting its potential for more reliable LLM use in medical scenarios.
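
As a toy illustration of the one-hop construction (function names, relation templates, and fields here are hypothetical, not the MKJ dataset's actual code):

```python
# Hypothetical sketch: turning a (subject, relation, object) triple from a
# medical knowledge graph into a one-hop true/false judgment item.
# Templates and field names are illustrative only.

def triple_to_judgment(subject: str, relation: str, obj: str, is_true: bool) -> dict:
    """Render a knowledge-graph triple as a factual judgment statement."""
    templates = {
        "may_treat": "{s} may be used to treat {o}.",
        "has_symptom": "{s} has the symptom {o}.",
    }
    statement = templates[relation].format(s=subject, o=obj)
    return {
        "statement": statement,
        "label": is_true,  # the LLM judges True/False; no multi-hop reasoning needed
    }

print(triple_to_judgment("Metformin", "may_treat", "type 2 diabetes", True))
```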

EMNLP 2025 (Main)

HELENE: Hessian Layer-wise Clipping and Gradient Annealing for Accelerating Fine-tuning LLM with Zeroth-order Optimization

Huaqin Zhao*, Jiaxi Li*, Yi Pan, Shizhe Liang, Xiaofeng Yang, Wei Liu, Xiang Li, Fei Dou, Tianming Liu, Jin Lu

TL;DR:
  • We introduce HELENE, an optimizer that accelerates zeroth-order fine-tuning of LLMs by integrating an annealed Asymptotic Gauss-Newton-Bartlett (A-GNB) estimator for diagonal Hessian approximation with a layer-wise clipping mechanism for curvature-aware updates (a background sketch of the underlying zeroth-order step follows this list).

  • HELENE delivers up to 20× faster convergence than MeZO, along with an average 2.5% accuracy boost across tasks on RoBERTa-large and OPT-1.3B.
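
For background, HELENE builds on MeZO-style zeroth-order optimization, whose core is an SPSA gradient estimate from two forward passes with a shared random perturbation. A minimal sketch of that underlying step, omitting HELENE's A-GNB Hessian estimator and layer-wise clipping (the `loss_fn` closure is an assumption):

```python
import torch

def spsa_grad_step(model, loss_fn, lr=1e-6, eps=1e-3, seed=0):
    """One MeZO-style zeroth-order step: estimate the gradient from two
    forward passes with a shared random perturbation, then update in place.
    A sketch only; HELENE additionally rescales this update with a diagonal
    Hessian estimate (A-GNB) and layer-wise clipping, both omitted here."""
    params = [p for p in model.parameters() if p.requires_grad]

    def perturb(scale):
        torch.manual_seed(seed)  # reset RNG so every pass sees the same z
        for p in params:
            z = torch.randn_like(p)
            p.data.add_(scale * eps * z)

    with torch.no_grad():
        perturb(+1)
        loss_plus = loss_fn(model)   # forward pass at theta + eps*z
        perturb(-2)                  # net effect: theta - eps*z
        loss_minus = loss_fn(model)  # forward pass at theta - eps*z
        perturb(+1)                  # restore original parameters
        proj_grad = (loss_plus - loss_minus) / (2 * eps)
        torch.manual_seed(seed)
        for p in params:
            z = torch.randn_like(p)
            p.data.add_(-lr * proj_grad * z)  # gradient estimate: proj_grad * z
```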

📖 Education

  • 2020.09 - 2024.06, Shandong University, B.E. in Computer Science and Technology.
  • 2024.08 - present, University of Georgia, Ph.D. in Computer Science.

💬 Selected Presentations

  • 2023.10, When do graph neural networks work on node classification tasks and when not? [Blog] [Zhihu]
  • 2024.10, Scaling up test-time compute for LLM reasoning. [Slides]

📝 Services

  • Reviewer for ACL 2025, EMNLP 2025.

🎖 Honors and Awards

  • 2024.06, Outstanding Graduate of Shandong Province.