The Ultra-Scale Playbook 🌌 The ultimate guide to training LLMs on large GPU clusters • Running • 2.47k
Open NotebookLM 🎙 Personalised Podcasts For All - Available in 13 Languages • Running on T4 • 1.08k
Writing in the Margins: Better Inference Pattern for Long Context Retrieval Paper • 2408.14906 • Published Aug 27, 2024 • 143
GoldFinch: High Performance RWKV/Transformer Hybrid with Linear Pre-Fill and Extreme KV-Cache Compression Paper • 2407.12077 • Published Jul 16, 2024 • 57