Publications

2025

  1. CVPR’25
    TinyFusion: Diffusion Transformers Learned Shallow
    Gongfan Fang, Kunjun Li, Xinyin Ma, and Xinchao Wang
    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025
    Compressing DiTs at 7% Training Costs | 2x Faster Inference
  2. CVPR’25
    Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient
    Zigeng Chen, Xinyin Ma, Gongfan Fang, and Xinchao Wang
    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

2024

  1. NeurIPS’24
    MaskLLM: Learnable Semi-structured Sparsity for Large Language Models
    Advances in Neural Information Processing Systems, 2024
    NeurIPS Spotlight (2.08%) | Sparse LLMs via End-to-End Training
  2. NeurIPS’24
    Remix-DiT: Mixing Diffusion Transformers for Multi-Expert Denoising
    Gongfan Fang, Xinyin Ma, and Xinchao Wang
    Advances in Neural Information Processing Systems, 2024
  3. ECCV’24
    Isomorphic Pruning for Vision Models
    Gongfan Fang, Xinyin Ma, Michael Bi Mi, and Xinchao Wang
    European Conference on Computer Vision, 2024
  4. NeurIPS’24
    AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising
    Zigeng Chen, Xinyin Ma, Gongfan Fang, Zhenxiong Tan, and Xinchao Wang
    Advances in Neural Information Processing Systems, 2024
  5. NeurIPS’24
    Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching
    Xinyin Ma, Gongfan Fang, Michael Bi Mi, and Xinchao Wang
    Advances in Neural Information Processing Systems, 2024
  6. NeurIPS’24
    SlimSam: 0.1% Data Makes Segment Anything Slim
    Zigeng Chen, Gongfan Fang, Xinyin Ma, and Xinchao Wang
    Advances in Neural Information Processing Systems, 2024
  7. CVPR’24
    DeepCache: Accelerating Diffusion Models for Free
    Xinyin Ma, Gongfan Fang, and Xinchao Wang
    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024
    Training-free and almost lossless | 2-7x Speedup on Diffusion Models
  8. Interspeech’24
    LiteFocus: Accelerated Diffusion Inference for Long Audio Synthesis
    Zhenxiong Tan, Xinyin Ma, Gongfan Fang, and Xinchao Wang
    Conference of the International Speech Communication Association, 2024

2023

  1. CVPR’23
    DepGraph: Towards Any Structural Pruning
    Gongfan Fang, Xinyin Ma, Mingli Song, Michael Bi Mi, and Xinchao Wang
    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023
    Automated Network Pruning | Top-5 on Github #Model-Compression
  2. NeurIPS’23
    LLM-Pruner: On the Structural Pruning of Large Language Models
    Xinyin Ma, Gongfan Fang, and Xinchao Wang
    Advances in Neural Information Processing Systems, 2023
    The First Structured Pruning Method for LLMs | Low-cost Pruning and Training
  3. NeurIPS’23
    Structural Pruning for Diffusion Models
    Gongfan Fang, Xinyin Ma, and Xinchao Wang
    Advances in Neural Information Processing Systems, 2023

2022

  1. AAAI’22
    Up to 100x Faster Data-free Knowledge Distillation
    Gongfan Fang, Kanya Mo, Xinchao Wang, Jie Song, Shitao Bei, Haofei Zhang, and Mingli Song
    Proceedings of the AAAI Conference on Artificial Intelligence, 2022
  2. IJCAI’22
    Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt
    Xinyin Ma, Xinchao Wang, Gongfan Fang, Yongliang Shen, and Weiming Lu
    Proceedings of the International Joint Conference on Artificial Intelligence, 2022
  3. TIP’23
    Knowledge Amalgamation for Object Detection with Transformers
    Haofei Zhang, Feng Mao, Mengqi Xue, Gongfan Fang, Zunlei Feng, Jie Song, and Mingli Song
    IEEE Transactions on Image Processing, 2022

2021

  1. NeurIPS’21
    Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data
    Gongfan Fang, Yifan Bao, Jie Song, Xinchao Wang, Donglin Xie, Chengchao Shen, and Mingli Song
    Advances in Neural Information Processing Systems, 2021
  2. IJCAI’21
    Contrastive Model Inversion for Data-free Knowledge Distillation
    Gongfan Fang, Jie Song, Xinchao Wang, Chengchao Shen, Xingen Wang, and Mingli Song
    Proceedings of the International Joint Conference on Artificial Intelligence, 2021

2020

  1. EMNLP’20
    Adversarial Self-Supervised Data-Free Distillation for Text Classification
    Xinyin Ma, Yongliang Shen, Gongfan Fang, Chen Chen, Chenghao Jia, and Weiming Lu
    Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020

2019

  1. Preprint’19
    Data-free Adversarial Distillation
    Gongfan Fang, Jie Song, Chengchao Shen, Xinchao Wang, Da Chen, and Mingli Song
    arXiv preprint arXiv:1912.11006, 2019