Gongfan Fang

Ph.D. Candidate | xML Lab | National University of Singapore.


Hi there! I'm Gongfan Fang, currently a third-year Ph.D. candidate at the xML Lab @ National University of Singapore, under the supervision of Prof. Xinchao Wang (Presidential Young Professor). Prior to joining the xML Lab, I earned my Bachelor's degree in 2019 and completed my Master's degree in 2022 at the Visual Intelligence and Pattern Analysis (VIPA) Lab @ Zhejiang University, advised by Prof. Mingli Song.

My research focuses on Efficient Deep Learning and Generative Models. I'm also actively contributing to several open-source projects such as Torch-Pruning, a framework for neural network pruning.




News

Feb, 2025 🍺 One first-author paper TinyFusion and three co-authored papers were accepted by CVPR’25.
Dec, 2024 🎵 I’m deeply honored to be awarded the 2024 ByteDance Scholarship (10~15 recipients per year).
Sep, 2024 🚀 Two first-author papers MaskLLM (Spotlight) and Remix-DiT were accepted by NeurIPS’24.

Selected Publications

  1. CVPR’25
    TinyFusion: Diffusion Transformers Learned Shallow
    Gongfan Fang, Kunjun Li, Xinyin Ma, and Xinchao Wang
    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025
    Compressing DiTs at 7% Training Costs | 2x Faster Inference
  2. NeurIPS’24
    MaskLLM: Learnable Semi-structured Sparsity for Large Language Models
    Advances in Neural Information Processing Systems, 2024
    NeurIPS Spotlight (2.08%) | Sparse LLMs via End-to-End Training
  3. CVPR’23
    DepGraph: Towards Any Structural Pruning
    Gongfan Fang, Xinyin Ma, Mingli Song, Michael Bi Mi, and Xinchao Wang
    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023
    Automated Network Pruning | Top-5 on Github #Model-Compression
  4. CVPR’24
    DeepCache: Accelerating Diffusion Models for Free
    Xinyin Ma, Gongfan Fang, and Xinchao Wang
    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024
    Training-free and almost lossless | 2-7x Speedup on Diffusion Models
  5. NeurIPS’23
    LLM-Pruner: On the Structural Pruning of Large Language Models
    Xinyin Ma, Gongfan Fang, and Xinchao Wang
    Advances in Neural Information Processing Systems, 2023
    The First Structural Pruning Method for LLMs | Low-cost Pruning and Training

Education

2022.07 - Present - Ph.D. in Electrical and Computer Engineering, National University of Singapore.

2019.09 - 2022.04 - M.Eng. in Computer Science, College of Computer Science and Technology, Zhejiang University.

2015.09 - 2019.06 - B.S. in Computer Science, College of Computer Science and Technology, Zhejiang University.