Tri Dao
Stanford University, Princeton University
Verified email at stanford.edu
Title · Cited by · Year
FlashAttention: Fast and memory-efficient exact attention with IO-awareness
T Dao, D Fu, S Ermon, A Rudra, C Ré
Advances in Neural Information Processing Systems 35, 16344-16359, 2022
658 · 2022
Starcoder: may the source be with you!
R Li, LB Allal, Y Zi, N Muennighoff, D Kocetkov, C Mou, M Marone, C Akiki, ...
arXiv preprint arXiv:2305.06161, 2023
350* · 2023
A kernel theory of modern data augmentation
T Dao, A Gu, A Ratner, V Smith, CD Sa, C Ré
Proceedings of the 36th International Conference on Machine Learning, ICML, 9-15, 2019
195 · 2019
HiPPO: Recurrent memory with optimal polynomial projections
A Gu, T Dao, S Ermon, A Rudra, C Ré
Advances in Neural Information Processing Systems 33, 1474-1487, 2020
170 · 2020
FlashAttention-2: Faster attention with better parallelism and work partitioning
T Dao
International Conference on Learning Representations, 2023
157 · 2023
Combining recurrent, convolutional, and continuous-time models with linear state space layers
A Gu, I Johnson, K Goel, K Saab, T Dao, A Rudra, C Ré
Advances in Neural Information Processing Systems 34, 572-585, 2021
130 · 2021
Mamba: Linear-time sequence modeling with selective state spaces
A Gu, T Dao
arXiv preprint arXiv:2312.00752, 2023
129 · 2023
Hungry Hungry Hippos: Towards Language Modeling with State Space Models
DY Fu, T Dao, KK Saab, AW Thomas, A Rudra, C Ré
The Eleventh International Conference on Learning Representations, 2023
120 · 2023
Hyena Hierarchy: Towards Larger Convolutional Language Models
M Poli, S Massaroli, E Nguyen, DY Fu, T Dao, S Baccus, Y Bengio, ...
International Conference on Machine Learning, 2023
107 · 2023
Learning fast algorithms for linear transforms using butterfly factorizations
T Dao, A Gu, M Eichhorn, A Rudra, C Ré
International Conference on Machine Learning, 1517-1527, 2019
96 · 2019
Scatterbrain: Unifying sparse and low-rank attention
B Chen, T Dao, E Winsor, Z Song, A Rudra, C Ré
Advances in Neural Information Processing Systems 34, 17413-17426, 2021
73* · 2021
MONGOOSE: A learnable LSH framework for efficient neural network training
B Chen, Z Liu, B Peng, Z Xu, JL Li, T Dao, Z Song, A Shrivastava, C Ré
International Conference on Learning Representations, 2020
69 · 2020
Deja Vu: Contextual sparsity for efficient LLMs at inference time
Z Liu, J Wang, T Dao, T Zhou, B Yuan, Z Song, A Shrivastava, C Zhang, ...
International Conference on Machine Learning, 22137-22176, 2023
63 · 2023
Gaussian quadrature for kernel features
T Dao, CM De Sa, C Ré
Advances in Neural Information Processing Systems 30, 2017
57 · 2017
Monarch: Expressive structured matrices for efficient and accurate training
T Dao, B Chen, NS Sohoni, A Desai, M Poli, J Grogan, A Liu, A Rao, ...
International Conference on Machine Learning, 4690-4721, 2022
54 · 2022
Pixelated butterfly: Simple and efficient sparse training for neural network models
T Dao, B Chen, K Liang, J Yang, Z Song, A Rudra, C Ré
International Conference on Learning Representations, 2021
50 · 2021
Learning compressed transforms with low displacement rank
A Thomas, A Gu, T Dao, A Rudra, C Ré
Advances in Neural Information Processing Systems 31, 2018
48 · 2018
Kaleidoscope: An efficient, learnable representation for all structured linear maps
T Dao, NS Sohoni, A Gu, M Eichhorn, A Blonder, M Leszczynski, A Rudra, ...
International Conference on Learning Representations, 2020
47 · 2020
S4ND: Modeling images and videos as multidimensional signals with state spaces
E Nguyen, K Goel, A Gu, G Downs, P Shah, T Dao, S Baccus, C Ré
Advances in Neural Information Processing Systems 35, 2846-2861, 2022
44 · 2022
Decentralized training of foundation models in heterogeneous environments
B Yuan, Y He, J Davis, T Zhang, T Dao, B Chen, PS Liang, C Ré, C Zhang
Advances in Neural Information Processing Systems 35, 25464-25477, 2022
43 · 2022
Articles 1–20