Ramé Alexandre
Google DeepMind
Verified email at google.com
Title
Cited by
Year
DyTox: Transformers for Continual Learning with DYnamic TOken eXpansion
A Douillard, A Ramé, G Couairon, M Cord
CVPR 2022, 2021
Cited by 331, 2021
Fishr: Invariant Gradient Variances for Out-of-Distribution Generalization
A Ramé, C Dancette, M Cord
ICML 2022, 2021
Cited by 221, 2021
Gemma 2: Improving open language models at a practical size
G Team, M Riviere, S Pathak, PG Sessa, C Hardin, S Bhupatiraju, ...
arXiv preprint arXiv:2408.00118, 2024
Cited by 203, 2024
Diverse Weight Averaging for Out-of-Distribution Generalization
A Ramé, M Kirchmeyer, T Rahier, A Rakotomamonjy, P Gallinari, M Cord
NeurIPS 2022, 2022
Cited by 118, 2022
Leveraging weakly annotated data for fashion image retrieval and label prediction
C Corbiere, H Ben-Younes, A Ramé, C Ollion
ICCV 2017 Workshop, 2017
Cited by 118, 2017
Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards
A Ramé, G Couairon, M Shukor, C Dancette, JB Gaya, L Soulier, M Cord
NeurIPS 2023, 2023
Cited by 92, 2023
Direct Language Model Alignment from Online AI Feedback
S Guo, B Zhang, T Liu, T Liu, M Khalman, F Llinares, A Ramé, T Mesnard, ...
arXiv preprint arXiv:2402.04792, 2024
Cited by 80, 2024
Model Ratatouille: Recycling Diverse Models for Out-of-Distribution Generalization
A Ramé, K Ahuja, J Zhang, M Cord, L Bottou, D Lopez-Paz
ICML 2023, 2023
Cited by 79*, 2023
MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks
A Ramé, R Sun, M Cord
ICCV 2021, 2021
Cited by 76, 2021
DICE: Diversity in Deep Ensembles via Conditional Redundancy Adversarial Estimation
A Ramé, M Cord
ICLR 2021, 2021
Cited by 69, 2021
WARM: On the Benefits of Weight Averaged Reward Models
A Ramé, N Vieillard, L Hussenot, R Dadashi, G Cideron, O Bachem, ...
ICML 2024, 2024
Cited by 47, 2024
Unified Model for Image, Video, Audio and Language Tasks
M Shukor, C Dancette, A Ramé, M Cord
TMLR 2023, 2023
Cited by 28*, 2023
OMNIA Faster R-CNN: Detection in the wild through dataset merging and soft distillation
A Ramé, E Garreau, H Ben-Younes, C Ollion
arXiv preprint arXiv:1812.02611, 2018
Cited by 16, 2018
BOND: Aligning LLMs with Best-of-N Distillation
PG Sessa, R Dadashi, L Hussenot, J Ferret, N Vieillard, A Ramé, ...
arXiv preprint arXiv:2407.14622, 2024
Cited by 15, 2024
Beyond Task Performance: Evaluating and Reducing the Flaws of Large Multimodal Models with In-Context Learning
M Shukor, A Ramé, C Dancette, M Cord
ICLR 2024, 2023
Cited by 12, 2023
Towards efficient feature sharing in MIMO architectures
R Sun, A Ramé, C Masson, N Thome, M Cord
CVPR 2022 ECV Workshop, 2022
Cited by 9, 2022
Conditional Language Policy: A General Framework for Steerable Multi-Objective Finetuning
K Wang, R Kidambi, R Sullivan, A Agarwal, C Dann, A Michi, M Gelmi, ...
arXiv preprint arXiv:2407.15762, 2024
Cited by 7, 2024
WARP: On the Benefits of Weight Averaged Rewarded Policies
A Ramé, J Ferret, N Vieillard, R Dadashi, L Hussenot, PL Cedoz, ...
arXiv preprint arXiv:2406.16768, 2024
Cited by 6, 2024
CORE: Color Regression for Multiple Colors Fashion Garments
A Ramé, A Douillard, C Ollion
CVPR 2022 Workshop on Computer Vision for Fashion, Art, and Design, 2020
Cited by 5*, 2020
Pre-train, fine-tune, interpolate: a three-stage strategy for domain generalization
A Ramé, J Zhang, L Bottou, D Lopez-Paz
NeurIPS 2022 Interpolation Workshop, 2022
Cited by 4*
Articles 1–20