Hello, I am Xin Zhang (pronunciation ≈ "shin chang"), an assistant professor at Peking University. Before joining Peking University, I was a postdoctoral associate at MIT CSAIL working with Prof. Armando Solar-Lezama. I received my Ph.D. from Georgia Tech under the supervision of Prof. Mayur Naik. I am broadly interested in topics related to programming languages and software engineering.

Over the past five years, my main research focus has been a new paradigm of program analysis that incorporates probabilistic reasoning into conventional abstract-interpretation-based program analysis. This paradigm, named Bayesian Program Analysis, enables program analyses to quantify the confidence of their results and to learn from external information. Around this paradigm, I have built applications in program analysis [OOPSLA'25a], fuzzing [POPL'26a], and fault localization [TSE'25, ICSE'22]; developed algorithms for abstraction selection [OOPSLA'24a, OOPSLA'25b], question selection [OOPSLA'26], and efficient inference [ASE'25]; and developed theories for abstract interpretation with confidence (conditionally accepted to PLDI'26). My other research interests include optimizing domain-specific languages for program synthesis [POPL'26b], artificial intelligence explainability [AAAI'25], autoformalization with LLMs [TOPLAS'26], and abstraction selection for traditional program analyses [SAS'21, OOPSLA'24b]. For details, please see Research.

Ph.D. Students
  • Zhiyi Li
  • Ziyue Jin
Alumni
  • Zirui Zhou (Undergraduate, now Ph.D. Student@UIUC)
  • Yaoxuan Wu (Undergraduate, now Ph.D. Student@UCLA)
  • Yifan Chen (M.S., co-advised with Yingfei Xiong) [SAS'21]
  • Zhentao Ye (M.S., co-advised with Yingfei Xiong) [POPL'26b]
Teaching
  • Introduction to Probabilistic Programming (Spring 2025)
  • Introduction to Discrete Mathematics (Fall 2024)
News
March 2026
New paper to appear in TOPLAS on applying LLMs to generate loop invariants via autoformalization.
March 2026
New paper conditionally accepted at PLDI'26.
Feb 2026
New paper at OOPSLA'26 on incorporating an exploration-exploitation scheme into Bayesian program analysis to improve learning effectiveness.
Oct 2025
Two papers conditionally accepted at POPL'26.
Oct 2025
Two new papers at OOPSLA'25: one on combining informal and formal information in program analysis, the other on applying CEGAR to find good representations for Bayesian program analyses.
Aug 2025
New paper at ASE'25 on exploiting logical structures in constraints to accelerate probabilistic inference.
June 2025
New TSE paper on applying Bayesian reasoning to fault localization.
Dec 2024
New paper on adding temporal information to machine learning model explanations at AAAI'25.
June 2024
I have been selected as a winner of the 2015-2016 Facebook Fellowship. Thank you, Facebook!