Distributed Training
Papers and resources related to distributed training. A minimal FSDP usage sketch follows the paper list.
PyTorch FSDP: Experiences on Scaling Fully Sharded Data Parallel • arXiv:2304.11277 • Published Apr 21, 2023
Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism • arXiv:1909.08053 • Published Sep 17, 2019
Reducing Activation Recomputation in Large Transformer Models • arXiv:2205.05198 • Published May 10, 2022
GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism • arXiv:1811.06965 • Published Nov 16, 2018
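The FSDP paper above describes sharding parameters, gradients, and optimizer state across data-parallel workers. Below is a minimal usage sketch with PyTorch's FullyShardedDataParallel; the toy model, dimensions, and single training step are placeholder assumptions for illustration, not the paper's setup.

```python
# Minimal FSDP sketch. Assumes launch via torchrun with one process per GPU;
# the model, batch, and loss below are toy stand-ins.
import os

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


def main():
    dist.init_process_group("nccl")                 # torchrun sets RANK/WORLD_SIZE
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    model = torch.nn.Sequential(                    # stand-in model; replace with your own
        torch.nn.Linear(1024, 4096),
        torch.nn.ReLU(),
        torch.nn.Linear(4096, 1024),
    ).cuda()

    # FSDP shards parameters, gradients, and optimizer state across ranks.
    model = FSDP(model)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    x = torch.randn(8, 1024, device="cuda")
    loss = model(x).pow(2).mean()                   # dummy loss for illustration
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched with, for example, `torchrun --nproc_per_node=2 fsdp_sketch.py`, each rank holds only its shard of the parameters outside of the forward/backward all-gathers.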
Tiny randomly initialized model checkpoints useful for testing; a loading sketch follows the list:
michaelbenayoun/deepseekv3-tiny-4kv-heads-4-layers-random • Text Generation
michaelbenayoun/granite-tiny-4kv-heads-4layers-random • Text Generation
michaelbenayoun/llama-2-tiny-4kv-heads-2layers-random • Feature Extraction
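A minimal sketch for loading one of the tiny checkpoints above with the transformers library. This assumes the installed transformers version supports the Granite architecture and that the repository ships a tokenizer; the weights are random, so generations are meaningless and only serve as a plumbing smoke test.

```python
# Smoke test with a tiny random-weight checkpoint (assumption: the repo ships a
# tokenizer and the installed transformers version supports this architecture).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "michaelbenayoun/granite-tiny-4kv-heads-4layers-random"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Distributed training is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)  # random weights, gibberish output
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```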