diff --git "a/B9E0T4oBgHgl3EQfyAKb/content/tmp_files/load_file.txt" "b/B9E0T4oBgHgl3EQfyAKb/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/B9E0T4oBgHgl3EQfyAKb/content/tmp_files/load_file.txt" @@ -0,0 +1,1880 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfyAKb/content/2301.02654v1.pdf,len=1879 +page_content='DOES COMPRESSING ACTIVATIONS HELP MODEL PARALLEL TRAINING?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfyAKb/content/2301.02654v1.pdf'} +page_content=' Song Bian * 1 Dacheng Li * 2 Hongyi Wang 2 Eric P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfyAKb/content/2301.02654v1.pdf'} +page_content=' Xing 2 3 4 Shivaram Venkataraman 1 ABSTRACT Large-scale Transformer models are known for their exceptional performance in a range of tasks, but training them can be difficult due to the requirement for communication-intensive model parallelism.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfyAKb/content/2301.02654v1.pdf'} +page_content=' One way to improve training speed is to compress the message size in communication.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfyAKb/content/2301.02654v1.pdf'} +page_content=' Previous approaches have primarily focused on compressing gradients in a data parallelism setting, but compression in a model-parallel setting is an understudied area.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfyAKb/content/2301.02654v1.pdf'} +page_content=' We have discovered that model parallelism has fundamentally different characteristics than data parallelism.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfyAKb/content/2301.02654v1.pdf'} +page_content=' In this work, we present the first empirical study on the effectiveness of compression methods for model parallelism.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfyAKb/content/2301.02654v1.pdf'} +page_content=' We implement and evaluate three common classes of compression algorithms - pruning-based, learning-based, and quantization-based - using a popular Transformer training framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfyAKb/content/2301.02654v1.pdf'} +page_content=' We evaluate these methods across more than 160 settings and 8 popular datasets, taking into account different hyperparameters, hardware, and both fine-tuning and pre-training stages.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfyAKb/content/2301.02654v1.pdf'} +page_content=' We also provide analysis when the model is scaled up.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfyAKb/content/2301.02654v1.pdf'} +page_content=' Finally, we provide insights for future development of model parallelism compression algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfyAKb/content/2301.02654v1.pdf'} +page_content=' 1 INTRODUCTION Transformer models have become the dominant model for many machine learning tasks (Devlin et al.' 
1 INTRODUCTION

Transformer models have become the dominant model for many machine learning tasks (Devlin et al., 2018; Radford et al., 2018; Yang et al., 2019; Dosovitskiy et al., 2020; Gong et al., 2021; Sharir et al., 2021). However, state-of-the-art Transformer models have a large number of parameters, making it difficult for a single GPU to hold the entire model. As a result, training large Transformer models often requires partitioning the model parameters among multiple GPUs, a technique known as model parallelism (Shoeybi et al., 2019; Rasley et al., 2020). Model parallelism strategies often introduce significant communication overhead, as demonstrated in Figure 1 (Li et al., 2022). For instance, the most commonly used tensor model parallelism strategy requires two all-reduce operations over a large tensor in each Transformer encoder block per iteration. This communication can greatly increase the overall cost of training the model (Shoeybi et al., 2019).
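To make it concrete where these communicated tensors arise, the following is a minimal PyTorch sketch of a Megatron-style tensor-parallel MLP sub-block. It is illustrative rather than Megatron-LM's actual implementation: the class name, the sharding by integer division, and the omission of bias handling and process-group plumbing are simplifying assumptions.

import torch
import torch.nn as nn
import torch.distributed as dist


class TensorParallelMLP(nn.Module):
    """One MLP sub-block of a Transformer encoder, sharded over tensor-parallel ranks."""

    def __init__(self, hidden_size: int, ffn_size: int, world_size: int):
        super().__init__()
        # Each rank keeps only a 1/world_size slice of the weights:
        # fc1 is split by output columns, fc2 by input rows.
        self.fc1 = nn.Linear(hidden_size, ffn_size // world_size)
        self.fc2 = nn.Linear(ffn_size // world_size, hidden_size)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        partial = self.fc2(self.act(self.fc1(x)))  # partial result held by this rank
        # The communicated message is this activation tensor of shape
        # [batch, seq_len, hidden_size]; one such all-reduce occurs in the MLP and
        # one in the self-attention sub-block of every encoder layer, every iteration.
        if dist.is_initialized():
            dist.all_reduce(partial, op=dist.ReduceOp.SUM)
        return partial

With fp16 activations, an example message of shape [8, 512, 1024] is already about 8 MB, and two such all-reduce operations run in every encoder block in every iteration, which is why shrinking these messages is attractive.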
To address the issue of high communication overhead in model parallelism, one approach is to compress the messages communicated among GPUs, such as activation values. In the data-parallel setting, several prior works have explored compressing gradients to reduce the communication cost of training (Seide et al., 2014; Bernstein et al., 2018; Dettmers, 2015; Lin et al., 2017; Wang et al., 2018b;
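To make the activation-compression idea concrete, here is a minimal, hypothetical sketch of one of the three classes studied in this paper, quantization-based compression, applied to an activation tensor before it would be communicated. The helper names are illustrative rather than taken from any framework, and a real system must fold the compression into the collective operation itself and account for any accuracy impact.

import torch


def quantize_int8(x: torch.Tensor):
    # Per-tensor symmetric quantization: map the largest magnitude to 127.
    scale = x.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp((x / scale).round(), -127, 127).to(torch.int8)
    return q, scale


def dequantize_int8(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # The receiving rank reconstructs an approximate fp32 activation.
    return q.to(torch.float32) * scale


# An activation message of shape [batch, seq_len, hidden_size].
activation = torch.randn(8, 512, 1024)
q, scale = quantize_int8(activation)         # 1 byte per element instead of 4
recovered = dequantize_int8(q, scale)        # lossy reconstruction on the receiver
print((activation - recovered).abs().max())  # the quantization error being traded off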