- Attention Is All You Need
  Paper • 1706.03762 • Published • 106
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 24
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 9
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 21
Taufiq Dwi Purnomo (taufiqdp)
AI & ML interests: SLM, VLM

Recent Activity
- liked a model 6 days ago: google/functiongemma-270m-it
- upvoted an article 6 days ago: Tokenization in Transformers v5: Simpler, Clearer, and More Modular
- liked a model 8 days ago: apple/Sharp