diff --git "a/2dE4T4oBgHgl3EQfagyV/content/tmp_files/load_file.txt" "b/2dE4T4oBgHgl3EQfagyV/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/2dE4T4oBgHgl3EQfagyV/content/tmp_files/load_file.txt" @@ -0,0 +1,2701 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf,len=2700 +page_content='Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks Xinsong Zhang 1 Yan Zeng 1 Jipeng Zhang 2 Hang Li 1 Abstract Foundation models or pre-trained models have substantially improved the performance of various language, vision, and vision-language understand- ing tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' However, existing foundation models can only perform the best in one type of tasks, namely language, vision, or vision-language.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' It is still an open question whether it is possible to con- struct a foundation model performing the best for all the understanding tasks, which we call a gen- eral foundation model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' In this paper, we propose a new general foundation model, X-FM (the X- Foundation Model).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' X-FM has one language en- coder, one vision encoder, and one fusion encoder, as well as a new training method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' The training method includes two new techniques for learning X-FM from text, image, and image-text pair data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' One is to stop gradients from the vision-language training when learning the language encoder.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' The other is to leverage the vision-language training to guide the learning of the vision encoder.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' Exten- sive experiments on benchmark datasets show that X-FM can significantly outperform existing gen- eral foundation models and perform better than or comparable to existing foundation models specif- ically for language, vision, or vision-language understanding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' Introduction With the enormous power of foundation models, also known as pre-trained models, remarkable performance gains have recently been achieved in a variety of understanding tasks in natural language processing (NLP), computer vision (CV), and other fields (Devlin et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=', 2019;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' Liu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=', 2019;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' Lewis et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=', 2020;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' Raffel et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=', 2020;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' Brown et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=', 2020;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' Doso- vitskiy et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=', 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' He et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=', 2022;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' Bao et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=', 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' Lu 1ByteDance AI Lab 2The Hong Kong University of Science and Technology.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf'} +page_content=' Correspondence to: Xinsong Zhang
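The two training techniques named in the abstract can be made concrete with a small sketch. The following is a hypothetical PyTorch illustration, not the authors' implementation: ToyEncoder, the placeholder losses, and the exact form of the vision-encoder guidance are assumptions made only for this example; the paper defines the actual objectives.

# Minimal sketch (assumed, illustrative only) of the two X-FM training techniques.
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Stand-in for a language, vision, or fusion encoder (assumption for the sketch)."""
    def __init__(self, dim=16):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        return self.proj(x)

language_enc = ToyEncoder()
vision_enc = ToyEncoder()
fusion_enc = ToyEncoder()

text_feats = language_enc(torch.randn(2, 16))
image_feats = vision_enc(torch.randn(2, 16))

# Technique 1: stop gradients from the vision-language objective so that it does not
# update the language encoder; only the fusion and vision paths receive this gradient.
fused = fusion_enc(text_feats.detach() + image_feats)
vl_loss = fused.pow(2).mean()  # placeholder for a real vision-language loss

# Technique 2: let the vision-language training guide the vision encoder, here by
# pulling the vision encoder's output toward the fused representation (one possible
# form of guidance, assumed for illustration).
guidance_loss = nn.functional.mse_loss(image_feats, fused.detach())

(vl_loss + guidance_loss).backward()

Under this sketch, the backward pass updates the fusion and vision encoders but leaves the language encoder untouched by the vision-language loss, matching the division of labor described in the abstract.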