Gongfan Fang

Title · Cited by · Year

LLM-Pruner: On the Structural Pruning of Large Language Models
X Ma, G Fang, X Wang
NeurIPS 2023, 2023
486 · 2023

DepGraph: Towards Any Structural Pruning
G Fang, X Ma, M Song, MB Mi, X Wang
CVPR 2023, 2023
360 · 2023

Structural Pruning for Diffusion Models
G Fang, X Ma, X Wang
Neural Information Processing Systems (NeurIPS), 2023
196* · 2023

Data-free adversarial distillation
G Fang, J Song, C Shen, X Wang, D Chen, M Song
arXiv preprint arXiv:1912.11006, 2019
173 · 2019

Deepcache: Accelerating diffusion models for free
X Ma, G Fang, X Wang
CVPR 2024, 2024
102 · 2024

Contrastive Model Inversion for Data-Free Knowledge Distillation
G Fang, J Song, X Wang, C Shen, X Wang, M Song
IJCAI 2021, 2021
96 · 2021

Up to 100× Faster Data-free Knowledge Distillation
G Fang, K Mo, X Wang, J Song, S Bei, H Zhang, M Song
AAAI 2022, 2021
82* · 2021

Knowledge amalgamation from heterogeneous networks by common feature learning
S Luo, X Wang, G Fang, Y Hu, D Tao, M Song
IJCAI 2019, 2019
57 · 2019

Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data
G Fang, Y Bao, J Song, X Wang, D Xie, C Shen, M Song
NeurIPS 2021, 2021
43 · 2021

Adversarial Self-Supervised Data-Free Distillation for Text Classification
X Ma, Y Shen, G Fang, C Chen, C Jia, W Lu
EMNLP 2020, 2020
22 · 2020

Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching
X Ma, G Fang, MB Mi, X Wang
NeurIPS 2024, 2024
17 · 2024

Knowledge Amalgamation for Object Detection with Transformers
H Zhang, F Mao, M Xue, G Fang, Z Feng, J Song, M Song
IEEE Transactions on Image Processing, 2022
15 · 2022

MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models
G Fang, H Yin, S Muralidharan, G Heinrich, J Pool, J Kautz, P Molchanov, ...
NeurIPS 2024, 2024
10 · 2024

0.1% Data Makes Segment Anything Slim
Z Chen, G Fang, X Ma, X Wang
NeurIPS 2024, 2023
9 · 2023

DepGraph: Towards Any Structural Pruning
G Fang, X Ma, M Song, MB Mi, X Wang
The IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023
7 · 2023

Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt
X Ma, X Wang, G Fang, Y Shen, W Lu
IJCAI 2022, 2022
7 · 2022

Torch-Pruning
G Fang
https://github.com/VainF/Torch-Pruning, 2019
7 · 2019

Deeplabv3plus-pytorch
G Fang
https://github.com/VainF/DeepLabV3Plus-Pytorch, 2019
7 · 2019

AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising
Z Chen, X Ma, G Fang, Z Tan, X Wang
NeurIPS 2024, 2024
5 · 2024

Pytorch ms-ssim
G Fang
5* · 2019