TRAINING WITH MIXED-PRECISION FLOATING-POINT ASSIGNMENTS

Wonyeol Lee 1  Rahul Sharma 2  Alex Aiken 1

ABSTRACT

When training deep neural networks, keeping all tensors in high precision (e.g., 32-bit or even 16-bit floats) is often wasteful. However, keeping all tensors in low precision (e.g., 8-bit floats) can lead to unacceptable accuracy loss. Hence, it is important to use a precision assignment—a mapping from all tensors (arising in training) to precision levels (high or low)—that keeps most of the tensors in low precision and leads to sufficiently accurate models.
We provide a technique that explores this memory-accuracy tradeoff by generating precision assignments that (i) use less memory and (ii) lead to more accurate models at the same time, compared to the precision assignments considered by prior work in low-precision floating-point training. Our method typically provides > 2× memory reduction over a baseline precision assignment while preserving training accuracy, and gives further reductions by trading off accuracy. Compared to other baselines, which sometimes cause training to diverge, our method provides similar or better memory reduction while avoiding divergence.

1 INTRODUCTION

In deep neural network training, floating-point formats are usually used to represent tensors, and it is worthwhile to use the smallest bitwidth format that gives acceptable results. For example, it is common to replace tensors using 32-bit floats with tensors that use 16-bit floats (Micikevicius et al., 2018; Kalamkar et al., 2019). The benefits are easy to understand: computations using lower-precision floats not only use less memory but are also faster (due to improved vector parallelism, locality, and reduced data movement). The downside is that there is generally some loss of training accuracy, and in the worst case training may not even converge.

For such low-precision floating-point training, the most common approaches use two floating-point formats—one for lower-precision floats (e.g., 8-bit floats) and the other for higher-precision floats (e.g., 16-bit floats)—and assign one of the two formats to each tensor (including weights, activations, and their gradients).
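As a minimal illustration of this two-format setup (not the paper's actual algorithm), a precision assignment can be viewed as a plain mapping from tensor names to formats, applied by casting each tensor. The tensor names are hypothetical, and since NumPy has no 8-bit float type, float16 and float32 stand in for the low- and high-precision formats here.

```python
import numpy as np

# Stand-ins for the low- and high-precision formats (NumPy has no 8-bit float).
LOW, HIGH = np.float16, np.float32

def apply_assignment(tensors, assignment):
    """Cast each named tensor to the precision level chosen by the assignment."""
    return {name: t.astype(assignment[name]) for name, t in tensors.items()}

# Hypothetical tensors arising in training: a weight and its gradient.
tensors = {
    "conv1.weight":      np.zeros((8, 3, 3, 3), dtype=np.float32),
    "conv1.weight_grad": np.zeros((8, 3, 3, 3), dtype=np.float32),
}
assignment = {"conv1.weight": LOW, "conv1.weight_grad": HIGH}
casted = apply_assignment(tensors, assignment)
```

Keeping a tensor in the low-precision format halves its memory footprint here (`casted["conv1.weight"].nbytes` is half of the original), which is the source of the memory savings the tradeoff is about.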
The precision assignments studied in previous work fall into one of two assignment schemes (which both have several variants): the uniform assignment uses low precision for almost all tensors (often excepting those in the first and/or last few layers) (Micikevicius et al., 2018), while the operator-based assignment limits low precision to the input tensors of certain operators (e.g., convolutions) (Sun et al., 2019). Prior work has shown that both precision assignment schemes (with well-chosen low-bitwidth floating-point formats) can match the accuracy of 32-bit-float training.

1 Stanford University, USA  2 Microsoft Research, India. Correspondence to: Wonyeol Lee
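The two schemes above can be sketched as simple rules over a toy model description. This is an assumed, simplified encoding (tensor name, layer index, consuming operator), not the representation used in the paper or in prior work; the assignment is just a dictionary from tensor names to the labels "low" and "high".

```python
# Hypothetical toy model: (tensor name, layer index, consuming operator).
TENSORS = [
    ("conv0.in", 0, "conv"),
    ("relu0.in", 0, "relu"),
    ("conv1.in", 1, "conv"),
    ("fc2.in",   2, "fc"),
]

def uniform_assignment(tensors, high_layers=()):
    # Uniform scheme: low precision everywhere, except tensors in a few
    # designated layers (commonly the first and/or last layers).
    return {name: ("high" if layer in high_layers else "low")
            for name, layer, _ in tensors}

def operator_based_assignment(tensors, low_ops=("conv",)):
    # Operator-based scheme: low precision only for the input tensors of
    # selected operators (e.g., convolutions); everything else stays high.
    return {name: ("low" if op in low_ops else "high")
            for name, _, op in tensors}

uni = uniform_assignment(TENSORS, high_layers=(0, 2))   # keep first/last layers high
opb = operator_based_assignment(TENSORS)                # only conv inputs low
```

Under these rules the two schemes disagree on, e.g., `conv0.in`: the uniform variant keeps it high (first layer), while the operator-based one makes it low (conv input), which is why the two schemes trade memory and accuracy differently.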