| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| SCLIP: Rethinking Self-Attention for Dense Vision-Language Inference | F Wang, J Mei, A Yuille | ECCV 2024 | 48 | 2023 |
| CP2: Copy-Paste Contrastive Pretraining for Semantic Segmentation | F Wang, H Wang, C Wei, A Yuille, W Shen | ECCV 2022 | 40 | 2022 |
| Learning to Decompose Visual Features with Latent Textual Prompts | F Wang, M Li, X Lin, H Lv, AG Schwing, H Ji | ICLR 2023 | 27 | 2022 |
| Mamba-R: Vision Mamba Also Needs Registers | F Wang, J Wang, S Ren, G Wei, J Mei, W Shao, Y Zhou, A Yuille, C Xie | arXiv preprint arXiv:2405.14858 | 17 | 2024 |
| Boost Neural Networks by Checkpoints | F Wang, G Wei, Q Liu, J Ou, H Lv | NeurIPS 2021 | 10 | 2021 |
| Autoregressive Pretraining with Mamba in Vision | S Ren, X Li, H Tu, F Wang, F Shu, L Zhang, J Mei, L Yang, P Wang, ... | arXiv preprint arXiv:2406.07537 | 7 | 2024 |
| Dual Prompt Tuning for Domain-Aware Federated Learning | G Wei, F Wang, A Shah, R Chellappa | arXiv preprint arXiv:2310.03103 | 5 | 2023 |
| AggEnhance: Aggregation Enhancement by Class Interior Points in Federated Learning with Non-IID Data | J Ou, Y Shen, F Wang, Q Liu, X Zhang, H Lv | ACM Transactions on Intelligent Systems and Technology (TIST) 13 (6), 1-25 | 5 | 2022 |
| Gradient Boosting Forest: A Two-Stage Ensemble Method Enabling Federated Learning of GBDTs | F Wang, J Ou, H Lv | International Conference on Neural Information Processing, 75-86 | 5 | 2021 |
| Causal Image Modeling for Efficient Visual Understanding | F Wang, T Yang, Y Yu, S Ren, G Wei, A Wang, W Shao, Y Zhou, A Yuille, ... | arXiv preprint arXiv:2410.07599 | 1 | 2024 |
| Scaling Laws in Patchification: An Image Is Worth 50,176 Tokens And More | F Wang, Y Yu, G Wei, W Shao, Y Zhou, A Yuille, C Xie | arXiv preprint arXiv:2502.03738 | | 2025 |
| M-VAR: Decoupled Scale-wise Autoregressive Modeling for High-Quality Image Generation | S Ren, Y Yu, N Ruiz, F Wang, A Yuille, C Xie | arXiv preprint arXiv:2411.10433 | | 2024 |