diff --git "a/0tFKT4oBgHgl3EQfOC19/content/tmp_files/load_file.txt" "b/0tFKT4oBgHgl3EQfOC19/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/0tFKT4oBgHgl3EQfOC19/content/tmp_files/load_file.txt" @@ -0,0 +1,1490 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf,len=1489 +page_content='Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion Flavio Schneider 1 Zhijing Jin 1 2 Bernhard Schölkopf 2 Abstract The recent surge in popularity of diffusion mod- els for image generation has brought new atten- tion to the potential of these models in other ar- eas of media synthesis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' One area that has yet to be fully explored is the application of diffusion models to music generation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Music generation requires to handle multiple aspects, including the temporal dimension, long-term structure, multi- ple layers of overlapping sounds, and nuances that only trained listeners can detect.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In our work, we investigate the potential of diffusion models for text-conditional music generation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' We develop a cascading latent diffusion approach that can gen- erate multiple minutes of high-quality stereo mu- sic at 48kHz from textual descriptions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' For each model, we make an effort to maintain reasonable inference speed, targeting real-time on a single consumer GPU.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In addition to trained models, we provide a collection of open-source libraries with the hope of facilitating future work in the field.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='1 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Introduction Music generation, or more generally audio generation, has multiple aspects at different levels of abstraction that make it a challenging problem (van den Oord et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2016;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Dieleman et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2018).' 
Despite its challenging nature, automated or model-assisted music generation has been an active area of research (Doornbusch, 2010; Salas et al., 2011; Giraudo, 2021). Recently, with the rise of deep learning models and their success in computer vision (Deng et al., 2009; Rombach et al., 2022; Chang et al., 2023) and natural language processing (Pennington et al., 2014; Radford et al., 2018; Devlin et al., 2019; Ouyang et al., 2022), it is also promising to see how much benefit deep learning models can bring to audio generation.

1 ETH Zürich, Switzerland. 2 Max Planck Institute for Intelligent Systems, Tübingen, Germany. Correspondence to: Flavio Schneider <flavio.schneider.97@gmail.com>.
1 We open-source the following:
– Music samples for this paper: bit.ly/anonymous-mousai
– All music samples for all models: bit.ly/audio-diffusion
– Code: github.com/archinetai/audio-diffusion-pytorch

[Figure 1: two-stage architecture diagram showing the text encoder (Tokenizer + Transformer), the latent diffusion generator (UNet2), and the diffusion decoder (UNet1), with the example prompt "Egyptian Darbuka, Drums, Rythm, (Deluxe Edition), 2 of 4".]
Figure 1. Two-stage generation architecture in the inference mode of our model. Specifically, we first encode text with a pretrained and frozen language model into a text embedding. Then, conditioning on the text, we generate a compressed latent with the diffusion generator, and finally, the compressed latent in turn is used to condition the diffusion decoder to generate the final waveform.

Existing audio generation models explore the use of recurrent neural networks (Mehri et al., 2017), generative adversarial networks (Kumar et al., 2019; Kim et al., 2021; Engel et al., 2019; Morrison et al., 2022), autoencoders (Deng et al., 2021), and transformers (Yu et al., 2022a).
As a more recent advancement in generative models, diffusion models have been used in speech synthesis (Kong et al., 2021; Lam et al., 2022; Leng et al., 2022), but they are still under-explored for music generation. Moreover, there are several long-standing challenges in the area of music generation: (1) modeling the long-term structure, (2) improving the sound quality, (3) increasing the diversity of the generated music, and (4) enabling easier control of the generation, such as through text prompts. A single model mastering all the proposed aspects would be a great addition to the music industry. It could enable the broader public to be part of the creative process by allowing them to compose music using an accessible text-based interface, assist creators in finding inspiration, and provide an unlimited supply of novel audio samples.
Table 1. Comparison of our Moûsai model with previous music generation models. We compare along (1) the audio sample rate @ number of channels (Sample Rate↑, higher is better), (2) the context length of the generated music (Ctx. Len.↑, where longer indicates a model more capable of generating structured music; we use ⋆ to indicate variable length, and we assume that autoregressive methods are variable by default but have an upper bound imposed by attention), (3) the input type (Input, where we highlight Text ✓ as the condition for generation), (4) the type of generated music (Music, where more Diverse↑ genres are better), (5) an example of the generated music type (Example), (6) the inference time (Infer. Time↓, where shorter is better; since the music length is on the order of seconds or minutes, an inference time equal to the audio length is the shortest, and we use ⋆ to mark models that can run inference fast on CPU), and (7) the total length of the music in the training data, in hours (Data).

Model | Sample Rate↑ | Ctx. Len.↑ | Input (Text ✓) | Music (Diverse↑) | Example | Infer. Time↓ | Data
WaveNet (2016) | 16kHz@1 | Secs | None | Piano or speech | Piano | = Audio len.⋆ | 260
Jukebox (2020) | 44.1kHz@1 | Mins⋆ | Lyrics, author, etc. | Song with the lyrics | Song | Hours | 70K
RAVE (2021) | 48kHz@2 | Secs⋆ | Latent | Single-genre music | Strings | = Audio len.⋆ | 100
AudioLM (2022) | 16kHz@1 | Secs⋆ | Beginning of the music | Piano or speech | Piano | Mins | 40K
Musika (2022) | 22.5kHz@2 | Secs | Context vector | Single-genre music | Piano | = Audio len.⋆ | 1K
Riffusion (2022) | 44.1kHz@1 | 5s | Text (genre, author, etc.) | Music of any genre | Jazzy clarinet | Mins | –
AudioGen (2022) | 16kHz@1 | Secs⋆ | Text (a phrase/sentence) | Daily sounds | Dog barks | Hours | 4K
Moûsai (Ours) | 48kHz@2 | Mins⋆ | Text (genre, author, etc.) | Music of any genre | African drums | = Audio len. | 2.5K
From the landscape of existing music generation models in Table 1, we can see that the aforementioned challenges exist widely throughout the literature. For example, most text-to-audio systems (Forsgren & Martiros, 2022; Kreuk et al., 2022) can only generate a few seconds of audio, and many tend to require long inference times of up to many GPU hours to generate one minute of audio (Dhariwal et al., 2020; Kreuk et al., 2022). Apart from the text-to-music generation models, if we look at unconditional music generation, some models can generate high-quality samples and run in real time on CPU (Caillon & Esling, 2021; Pasini & Schlüter, 2022), but they are usually trained on a single modality (resulting in the ability to handle only single-genre music rather than diverse genres), and none can handle long-term structure (van den Oord et al., 2016; Caillon & Esling, 2021; Pasini & Schlüter, 2022).
To this end, we propose Moûsai,2 a text-conditional cascading diffusion model (Figure 1) that tries to address all the mentioned challenges at the same time. Specifically, our Moûsai model uses a custom two-stage cascading diffusion method shown in Figure 1. In the first stage, it compresses the audio waveform using a novel diffusion autoencoder, and in the second stage, it learns to generate the reduced latent representations conditioned on the text embedding generated by a pretrained language model. Both stages use an efficient U-Net optimized by us, enabling fast inference, which makes the model realistic for use in future applications.

2 Moûsai is romanized ancient Greek for Muses, the sources of artistic inspiration (https://en.wikipedia.org/wiki/Muses). Given that inspiration is exactly what the system may be lacking, this name may not be apposite, but the reminiscence to both music and AI was simply too compelling.

In conclusion, the main contributions of our work are as follows:

1. We make it possible to generate long-context 48kHz stereo music exceeding the minute mark, based on context exceeding the minute mark, and to generate a variety of music.

2. We propose an efficient 1D U-Net architecture for both stages of the cascade, making it possible to generate audio in real time on a single consumer GPU. Likewise, each stage of our system can be trained on one A100 GPU in approximately one week, making it possible to train and run the overall system using modest resources, as available in most universities.
3. We present a new diffusion magnitude autoencoder that can compress the audio signal 64x compared to the original waveform with only moderate quality loss, and that is used by the generation stage of the architecture to apply latent diffusion on.

2. Related Work

A common trend in the generative space has been to first train a representation learning, compression, or upsampling model on the input domain, and later learn a generative model on top of the reduced representation while conditioning on the information of interest (Rombach et al., 2022; Yang et al., 2022; Kreuk et al., 2022; Ho et al., 2022; Villegas et al., 2022). This can be drastically more efficient than directly learning on the raw input data, as the generative model can work on a much lower-dimensional representation and hence capture coarse structures.
Auto-encoding (Hinton & Salakhutdinov, 2006; Kingma & Welling, 2014) and quantized auto-encoding (van den Oord et al., 2017; Esser et al., 2021; Lee et al., 2022) are popular compression methods originally proposed for the image domain that have been similarly and successfully applied as audio representations (Caillon & Esling, 2021; Pasini & Schlüter, 2022; Baevski et al., 2020; Zeghidour et al., 2022; Défossez et al., 2022). The two most popular directions in the generative space are either to learn a quantized representation followed by masked or autoregressive learning on tokens (Villegas et al., 2022; Yu et al., 2022b; Chang et al., 2023; Dhariwal et al., 2020; Borsos et al., 2022; Yang et al., 2022; Kreuk et al., 2022), or to use a learned (continuous) compressed or deterministically downsampled representation and later apply diffusion models as generators to reconstruct the noise-masked data in another stage (Ramesh et al., 2022; Rombach et al., 2022; Saharia et al., 2022; Ho et al., 2022; Forsgren & Martiros, 2022).
Methods using the former tokenized representation have been successful, but not up to the same level of performance as the latter ("cascading") diffusion methods. In our work, we follow ideas from the cascading diffusion approach, which, to the best of our knowledge, has never been attempted for audio generation. We use a custom two-stage cascading diffusion method, where the first stage compresses audio using a novel diffusion autoencoder, and the second stage learns to generate the reduced representation while conditioning on a textual description.

3. Preliminaries

In this section, we introduce several preliminaries that serve as the basis for our model. Specifically, we give an overview of the workings of diffusion, latent diffusion, and the U-Net.
3.1. Audio Generation

Audio generation has long been a challenging task. At the lowest level, we have digital waveforms that control the air movement produced by speakers. Waveforms can be represented at different resolutions, or sample rates. Higher sample rates (e.g., 48kHz) allow for more temporal resolution and can represent higher frequencies, but are at the same time computationally more demanding to generate. At higher levels of abstraction, we find qualitative properties such as texture (timbre) or pitch. Zooming out, we observe structure such as rhythm and melody that can span multiple seconds, or even be composed into choruses that form minutes of interconnected patterns. Audio can be represented with a single waveform (mono), two waveforms (stereo), or even more in the case of surround sound. Audio with two or more channels can give a sense of movement and spatialisation. From the modelling perspective, there are unconditional models that generate novel samples from the training distribution without any additional information, and conditional models that use a form of guidance, such as text, to control the generation. Models can be trained on a single modality (e.g., drums or piano) or on multiple modalities, which usually requires more parameters for increased modelling capacity and results in a decrease in speed.
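To make the scale concrete, the following back-of-the-envelope sketch (our own illustration, not a figure from the paper) shows the raw dimensionality of a one-minute stereo clip at 48kHz, which is what motivates compressing the waveform before modelling it:

```python
# Rough arithmetic: raw dimensionality of one minute of 48 kHz stereo audio.
sample_rate = 48_000        # samples per second per channel
channels = 2                # stereo
seconds = 60

samples_per_channel = sample_rate * seconds     # 2,880,000 samples
total_values = channels * samples_per_channel   # 5,760,000 values to generate
print(samples_per_channel, total_values)
```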
3.2. Diffusion

We employ $\boldsymbol{v}$-objective diffusion as proposed by Salimans & Ho (2022). Given a sample $\boldsymbol{x}_0$ from a distribution $p(\boldsymbol{x}_0)$, some noise schedule $\sigma_t \in [0, 1]$, and some noisy data point $\boldsymbol{x}_{\sigma_t} = \alpha_{\sigma_t} \boldsymbol{x}_0 + \beta_{\sigma_t} \boldsymbol{\epsilon}$, $\boldsymbol{v}$-objective diffusion tries to estimate a model $\hat{\boldsymbol{v}}_{\sigma_t} = f_\theta(\boldsymbol{x}_{\sigma_t}, \sigma_t)$ minimizing the following objective:

$$\mathbb{E}_{t \sim [0,1],\, \sigma_t,\, \boldsymbol{x}_{\sigma_t}} \left[ \left\| f_\theta(\boldsymbol{x}_{\sigma_t}, \sigma_t) - \boldsymbol{v}_{\sigma_t} \right\|_2^2 \right], \quad (1)$$

where $\boldsymbol{v}_{\sigma_t} = \frac{\partial \boldsymbol{x}_{\sigma_t}}{\partial \sigma_t} = \alpha_{\sigma_t} \boldsymbol{\epsilon} - \beta_{\sigma_t} \boldsymbol{x}_0$, with $\alpha_{\sigma_t} := \cos(\phi_t)$, $\beta_{\sigma_t} := \sin(\phi_t)$, and $\phi_t := \frac{\pi}{2} \sigma_t$. By estimating the rate of change, ODE samplers can be used to turn noise into a new data point. In this work, we use the DDIM sampler (Song et al., 2021), which we find to work well and to offer a reasonable tradeoff between the number of steps and audio quality. The DDIM sampler denoises the signal by repeated application of the following:

$$\hat{\boldsymbol{v}}_{\sigma_t} = f_\theta(\boldsymbol{x}_{\sigma_t}, \sigma_t) \quad (2)$$
$$\hat{\boldsymbol{x}}_0 = \alpha_{\sigma_t} \boldsymbol{x}_{\sigma_t} - \beta_{\sigma_t} \hat{\boldsymbol{v}}_{\sigma_t} \quad (3)$$
$$\hat{\boldsymbol{\epsilon}}_{\sigma_t} = \beta_{\sigma_t} \boldsymbol{x}_{\sigma_t} + \alpha_{\sigma_t} \hat{\boldsymbol{v}}_{\sigma_t} \quad (4)$$
$$\hat{\boldsymbol{x}}_{\sigma_{t-1}} = \alpha_{\sigma_{t-1}} \hat{\boldsymbol{x}}_0 + \beta_{\sigma_{t-1}} \hat{\boldsymbol{\epsilon}}_{\sigma_t}, \quad (5)$$

which estimates both the initial data point and the noise at step $\sigma_t$, for some $T$-step noise schedule $\sigma_T, \ldots, \sigma_0$ linearly spaced between 1 and 0.
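As an illustration, the sampling loop of Eqs. (2)-(5) can be written in a few lines of PyTorch. This is a minimal sketch under the assumption that `model(x, sigma)` predicts the $\hat{\boldsymbol{v}}_{\sigma_t}$ target; it is not the authors' implementation (their code is in the linked repository).

```python
import math
import torch

def alpha_beta(sigma):
    # alpha = cos(phi), beta = sin(phi) with phi = (pi/2) * sigma, as in Sec. 3.2.
    phi = 0.5 * math.pi * sigma
    return math.cos(phi), math.sin(phi)

@torch.no_grad()
def ddim_sample(model, noise, num_steps):
    """Minimal DDIM loop for v-objective diffusion, following Eqs. (2)-(5)."""
    x = noise
    sigmas = torch.linspace(1.0, 0.0, num_steps + 1).tolist()  # linearly spaced schedule
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        a, b = alpha_beta(sigma)
        v_hat = model(x, sigma)                  # Eq. (2): predict v
        x0_hat = a * x - b * v_hat               # Eq. (3): estimate the clean signal
        eps_hat = b * x + a * v_hat              # Eq. (4): estimate the noise
        a_next, b_next = alpha_beta(sigma_next)
        x = a_next * x0_hat + b_next * eps_hat   # Eq. (5): step to the next noise level
    return x

# Usage with a stand-in model that predicts zeros, on ~0.7 s of stereo noise:
dummy_model = lambda x, sigma: torch.zeros_like(x)
sample = ddim_sample(dummy_model, torch.randn(1, 2, 2**15), num_steps=50)
```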
3.3. Latent Diffusion

Following the work on image diffusion (Rombach et al., 2022), we compress audio into a smaller representation and apply the diffusion process on the reduced latent space. In contrast to Rombach et al. (2022), we propose a diffusion-based autoencoder instead of a standard autoencoder, increasing the representational power of the decoding process and the amount of compression allowed.

[Figure 2: diagram of the recursively nested UNetBlock with downsampling, upsampling, skip connections, and repeated items R, C, A, M, I at each resolution.]
Figure 2. 1D U-Net architecture used both for the diffusion decoder and the latent diffusion generator. The inner dashed region indicates that the UNetBlock can be recursively nested. ResNet items (R) are used as convolutional blocks, modulation items (M) are used to provide the diffusion noise level as a conditioning feature vector, inject items (I) are used to inject external channels as conditioning (used for diffusion decoding only), attention items (A) are used to share information timewise, and cross-attention items (C) are used to condition on external (text) embeddings.

3.4. U-Net

U-Nets were first proposed by Ronneberger et al. (2015) as an hourglass, convolution-only 2D architecture with skip connections; originally used for medical image segmentation, the design has since been repurposed for multiple uses, such as image, audio, and video generation. Our proposed U-Net has little resemblance to the original work, and is infused with multiple new components, such as more modern convolutional blocks, a variety of attention blocks, conditioning blocks, and improved skip connections, maintaining only a skeleton of the hourglass architecture.
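To show only the hourglass skeleton that this section refers to, here is a deliberately tiny 1D U-Net with a single down/up level and one skip connection. It is an illustrative sketch, not the paper's architecture: the real model nests many resolutions and interleaves the R/M/I/A/C items described in Figure 2.

```python
import torch
from torch import nn

class TinyUNet1d(nn.Module):
    """Bare hourglass skeleton with a skip connection (illustration only)."""

    def __init__(self, channels=2, width=32):
        super().__init__()
        self.inp = nn.Conv1d(channels, width, 3, padding=1)
        self.down = nn.Conv1d(width, width * 2, 4, stride=2, padding=1)
        self.mid = nn.Conv1d(width * 2, width * 2, 3, padding=1)
        self.up = nn.ConvTranspose1d(width * 2, width, 4, stride=2, padding=1)
        self.out = nn.Conv1d(width * 2, channels, 3, padding=1)

    def forward(self, x):
        h0 = torch.relu(self.inp(x))        # full resolution
        h1 = torch.relu(self.down(h0))      # downsample 2x
        h1 = torch.relu(self.mid(h1))
        h2 = torch.relu(self.up(h1))        # upsample back to full resolution
        h2 = torch.cat([h2, h0], dim=1)     # skip connection from the encoder path
        return self.out(h2)

y = TinyUNet1d()(torch.randn(1, 2, 1024))   # -> shape (1, 2, 1024)
```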
4. Text-to-Music Generation with Moûsai

Moûsai is composed of two independently trained models. The first stage (DMAE) is responsible for compressing the audio waveform 64x using a diffusion autoencoder. In the second stage (latent text-to-audio diffusion), we generate a novel latent space with the diffusion model while conditioning on text embeddings obtained from a frozen transformer language model. For both diffusion models, we use the same efficient 1D U-Net architecture with varying configurations.

4.1. 1D U-Net

In this work, we use a 1D U-Net architecture employed in different configurations for both the autoencoding and the latent diffusion stage (Figure 2). U-Nets with 1D convolutional kernels are more efficient than 2D ones in terms of speed, and can be successfully used both on waveforms and on spectrograms if each frequency is treated as a different channel.

[Figure 3: diagram of the DMAE training scheme: the waveform is converted to a magnitude spectrogram (STFTMag) and encoded into a latent, while the U-Net denoises the noised waveform conditioned on that latent.]
Figure 3. Diffusion Magnitude Autoencoder (DMAE) training scheme. The diffusion autoencoder stage learns to compress audio 64x (compared to the original waveform) into a smaller latent space. To train this stage, the waveform is first converted to a magnitude spectrogram, then auto-encoded into a latent. At the same time, the original audio is corrupted with a random amount of noise and the U-Net is trained to remove that noise. During the noise removal process, the U-Net is conditioned on the noise level and on the compressed latent, which has access to a reduced version of the non-noisy audio.
We use a variety of repeated items at each resolution of the U-Net, namely: (R) a residual 1D convolutional unit, (M) a modulation unit used to alter the channels given features from the diffusion noise level, (I) an inject item that concatenates external channels to the ones at the current depth (the lengths must match), (A) an attention item used to share long-context structural information, and (C) a cross-attention item used to condition on text embeddings. Inject items are applied only at a specific depth in the first-stage decoder to condition on the latent. Attention and cross-attention items are instead used only in the inner blocks of the second-stage U-Net, to learn structure and to condition on text.

4.2. Diffusion Magnitude-Autoencoding (DMAE)

Diffusion autoencoders were first introduced by Preechakul et al. (2022) as a way to condition the diffusion process on a compressed latent vector of the input itself. Diffusion can act as a more powerful generative decoder, and hence the input can be reduced to latents with higher compression ratios. In this work, we propose a new diffusion autoencoder that first encodes a magnitude spectrogram into a compressed representation, and later injects the latent into intermediate channels of the decoding 1D U-Net (Figure 3). Let $\boldsymbol{w}$ be a waveform of shape $[c, t]$ for $c$ channels and $t$ timesteps, and let $(\boldsymbol{m}_{\boldsymbol{w}}, \boldsymbol{p}_{\boldsymbol{w}}) = \operatorname{stft}(\boldsymbol{w}; n = 1024, h = 256)$ be the magnitude and phase obtained from a short-time Fourier transform of the waveform with a window size of 1024 and a hop length of 256. The resulting spectrograms then have shape $[c \cdot n, \frac{t}{h}]$.
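The magnitude-only representation can be sketched as follows. This is our own illustration, assuming a Hann window and PyTorch's one-sided STFT (which yields $n/2 + 1$ frequency bins per channel); the exact transform settings and reshaping in the authors' code may differ.

```python
import torch

def magnitude_spectrogram(w, n_fft=1024, hop=256):
    """Per-channel magnitude STFT, with every frequency bin folded into a 1D channel."""
    c, t = w.shape
    window = torch.hann_window(n_fft)
    spec = torch.stft(w, n_fft=n_fft, hop_length=hop, window=window,
                      return_complex=True)       # (c, n_fft//2 + 1, t//hop + 1)
    mag = spec.abs()                              # discard phase, keep magnitude
    return mag.reshape(c * spec.shape[1], -1)     # frequencies become channels

w = torch.randn(2, 2**18)                         # ~5.5 s of stereo audio at 48 kHz
m = magnitude_spectrogram(w)
print(m.shape)                                    # roughly (c * (n/2 + 1), t/h)
```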
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' We discard phase and encode the magnitude into a latent zzz = encθenc(m m mw ww) us- ing a 1D convolutional encoder.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' The original waveform is then reconstructed by decoding the latent using a diffusion model ˆwww = decθdec(zzz,ϵϵϵ, s), where decθdec is the diffusion sampling process with starting noise ϵϵϵ and s is the num- ber of decoding (sampling) steps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' The decoder is trained with vvv-objective diffusion while conditioning on the latent fθdec(wwwσt;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' σt,zzz), where fθdec is the proposed 1D U-Net, called repeatedly during decoding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Since only the magnitude is used and phase is discarded, this diffusion autoencoder is simultaneously a compressing autoencoder and vocoder.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' By using the magnitude spec- trograms, higher compression ratios can be obtained than autoencoding directly the waveform.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' We found that wave- forms are less compressible and efficient to work with.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Sim- ilarly, discarding phase is benificial to obtain higher com- pression ratios for the same level of quality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' The diffusion model can easily learn to generate a waveform with realistic phase even if conditioned only on the encoded magnitude.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Depending on the desired speed/quality tradeoff, more or less compression can be applied in this first stage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Following our single GPU constraint, we find that 64x compression factor is a good balance to make sure the second stage can work on a reduced representation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' The latent space produced is then used as a starting point for the next diffusion stage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' To make sure that the reduced latent space can be used for latent diffusion, we apply a tanh function on the bottleneck, keeping the values in the range [−1, 1].' 
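A single DMAE training step could look roughly like the sketch below, reusing the `magnitude_spectrogram` helper sketched above. The `encoder` and `unet` modules, tensor shapes, and the way the latent is passed in are placeholders for illustration, not the authors' networks.

```python
import torch
import torch.nn.functional as F

def dmae_training_step(encoder, unet, waveform):
    """One illustrative training step: tanh-bounded latent from the magnitude,
    then v-objective denoising of the noised waveform conditioned on that latent."""
    mag = magnitude_spectrogram(waveform[0]).unsqueeze(0)   # (1, c*(n/2+1), frames)
    latent = torch.tanh(encoder(mag))                       # bottleneck kept in [-1, 1]

    sigma = torch.rand(())                                  # random noise level in [0, 1]
    phi = 0.5 * torch.pi * sigma
    alpha, beta = torch.cos(phi), torch.sin(phi)
    eps = torch.randn_like(waveform)
    w_noisy = alpha * waveform + beta * eps                 # corrupt the audio
    v_target = alpha * eps - beta * waveform                # v-objective target, Eq. (1)

    v_pred = unet(w_noisy, sigma, latent)                   # latent injected via (I) items
    return F.mse_loss(v_pred, v_target)
```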
A more disentangled bottleneck, such as the one used in VAEs (Kingma & Welling, 2014), could be used, but the additional regularization reduces the amount of compression allowed.

4.3. Latent Text-to-Audio Diffusion

The second stage applies latent diffusion on the previously obtained compressed space (Figure 4). Similarly to the previous stage, we use $\boldsymbol{v}$-objective diffusion with the 1D U-Net architecture in a different configuration, $f_{\theta_{gen}}(\boldsymbol{z}_{\sigma_t}; \sigma_t, \boldsymbol{e})$, conditioning on the text embedding $\boldsymbol{e}$ to generate the compressed latent $\boldsymbol{z} = \operatorname{enc}_{\theta_{enc}}(\boldsymbol{m}_{\boldsymbol{w}})$. The generation function $\hat{\boldsymbol{z}} = \operatorname{gen}_{\theta_{gen}}(\boldsymbol{e}, \boldsymbol{\epsilon}, s)$ again uses DDIM sampling and calls the U-Net $s$ times to generate an approximate latent $\hat{\boldsymbol{z}}$ from the text embedding $\boldsymbol{e}$ and starting noise $\boldsymbol{\epsilon}$. The final generation stack during inference to obtain a waveform is

$$\hat{\boldsymbol{w}} = \operatorname{dec}_{\theta_{dec}}\big(\operatorname{gen}_{\theta_{gen}}(\boldsymbol{e}, \boldsymbol{\epsilon}_{gen}, s_{gen}),\, \boldsymbol{\epsilon}_{dec},\, s_{dec}\big). \quad (6)$$

[Figure 4: diagram of the text-conditional latent diffusion generator: the latent from the encoder is noised and denoised by the U-Net, conditioned on the noise level and on the embedding from a frozen language model.]
Figure 4. Text-conditional latent diffusion generator training scheme. This stage is trained to generate novel latent spaces that follow a distribution similar to the ones produced by the autoencoder. The audio source is first encoded into the latent using the encoder, then the latent is corrupted with a random amount of noise, and the U-Net is trained to remove the noise. While the U-Net denoises the signal, the noise level is provided as a feature vector, and an encoded textual description of the original waveform is provided as an embedding encoded with a frozen language model.
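Putting the two stages together, Eq. (6) corresponds to the following sketch, reusing the `ddim_sample` loop from Section 3.2. Module names and tensor shapes are placeholders chosen for illustration (32 latent channels at 64x compression, as described in Section 5), not the authors' exact interfaces.

```python
import torch

@torch.no_grad()
def generate(text, text_encoder, latent_unet, decoder_unet,
             latent_shape=(1, 32, 2**15), audio_shape=(1, 2, 2**21),
             s_gen=50, s_dec=50):
    """Eq. (6) as a sketch: text -> frozen LM embedding -> latent diffusion ->
    diffusion decoding back to a waveform."""
    emb = text_encoder(text)                                   # frozen T5 embedding
    # Stage 2: generate a compressed latent conditioned on the text embedding.
    z_hat = ddim_sample(lambda z, s: latent_unet(z, s, emb),
                        torch.randn(latent_shape), num_steps=s_gen)
    # Stage 1 decoder: turn the latent back into a 48 kHz stereo waveform.
    w_hat = ddim_sample(lambda x, s: decoder_unet(x, s, z_hat),
                        torch.randn(audio_shape), num_steps=s_dec)
    return w_hat
```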
The 1D U-Net used in this stage includes cross-attention blocks to provide the conditioning text embedding, and multiple attention blocks to make sure information can be shared over the entire latent, which is crucial for learning long-range audio structure. Given the compressed size of the latent space, the size of this inner U-Net can be greatly increased compared to the first stage while maintaining a reasonable training and inference speed, even with large parameter counts.

4.4. Text Conditioning

To obtain the text embeddings, prior work on text conditioning suggests either learning a joint data-text representation (Li et al., 2022; Elizalde et al., 2022; Ramesh et al., 2022) or using embeddings from a pre-trained language model as direct conditioning for the latent model (Saharia et al., 2022; Ho et al., 2022). In our model, we follow the practice of Saharia et al. (2022) and use a pre-trained, frozen T5 language model (Raffel et al., 2020) to generate text embeddings from the given description. We use classifier-free guidance (CFG) (Ho & Salimans, 2022) with a learned mask applied on batch elements with a probability of 0.1 to improve the strength of the text embedding during inference.
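A minimal sketch of this conditioning setup is shown below, using the Hugging Face transformers T5 encoder to embed prompts and randomly masking batch elements during training for classifier-free guidance; the 0.1 masking probability follows the text, while everything else (model size, masking by substituting a learned null embedding) is an illustrative assumption.

```python
import torch
import torch.nn as nn
from transformers import T5Tokenizer, T5EncoderModel

class TextConditioner(nn.Module):
    """Frozen T5 text encoder plus a CFG dropout mask (illustrative sketch)."""

    def __init__(self, model_name="t5-base", cfg_drop_prob=0.1):
        super().__init__()
        self.tokenizer = T5Tokenizer.from_pretrained(model_name)
        self.encoder = T5EncoderModel.from_pretrained(model_name)
        self.encoder.requires_grad_(False)  # T5 stays frozen
        self.cfg_drop_prob = cfg_drop_prob
        # Learned "null" embedding used when the text is masked out (assumed mechanism).
        self.null_embedding = nn.Parameter(torch.zeros(self.encoder.config.d_model))

    def forward(self, texts, training=True):
        tokens = self.tokenizer(texts, padding=True, return_tensors="pt")
        with torch.no_grad():
            emb = self.encoder(**tokens).last_hidden_state  # [batch, seq, d_model]
        if training:
            # With probability 0.1, replace a batch element's text embedding with the
            # learned null embedding so the model also learns an unconditional path.
            drop = torch.rand(emb.shape[0], device=emb.device) < self.cfg_drop_prob
            emb = torch.where(drop[:, None, None], self.null_embedding.expand_as(emb), emb)
        return emb
```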
5. Experimental Setup

For the experimental setup, we first give a high-level overview of the dataset and the training setup in Section 5.1, then dive into the details of the implementation in Section 5.2 and the hardware requirements in Section 5.3.

5.1. Dataset and Training Setup

We train all models on a (relatively modest) collection that we compiled, consisting of 2,500 hours of stereo music sampled at 48kHz and spanning multiple genres, artists, instruments, and provenances, in order to maintain a highly diverse dataset. The autoencoder is trained on random crops of length $2^{18}$ samples (~5.5s at 48kHz), and the text-conditional diffusion generation model is trained on fixed crops of length $2^{21}$ samples (~44s at 48kHz) encoded into the 32-channel, 64x-compressed latent.
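The crop lengths translate to durations as in the small helper below; the cropping function is a hypothetical illustration of how fixed-length crops could be drawn from longer stereo recordings.

```python
import torch

SAMPLE_RATE = 48_000
AE_CROP = 2 ** 18    # ~5.46 s, used for the diffusion autoencoder
GEN_CROP = 2 ** 21   # ~43.7 s, used for the text-conditional latent generator

def seconds(n_samples, sr=SAMPLE_RATE):
    return n_samples / sr

def random_crop(wave, length):
    """Draw a random crop of `length` samples from a stereo waveform [2, samples]."""
    if wave.shape[-1] <= length:  # pad short songs up to the crop length
        pad = length - wave.shape[-1]
        return torch.nn.functional.pad(wave, (0, pad))
    start = torch.randint(0, wave.shape[-1] - length, (1,)).item()
    return wave[..., start:start + length]

print(f"autoencoder crop: {seconds(AE_CROP):.2f}s, generator crop: {seconds(GEN_CROP):.2f}s")
```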
For the textual description, we use metadata such as the title, author, album, genre, and year of release. Given that a song can span longer than 44s, we append a string indicating which chunk is currently being trained on, together with the total number of chunks the song is made of (e.g., 1 of 4). This allows selecting the region of interest during inference. Hence, an example prompt looks like "Egyptian Darbuka, Drums, Rythm, (Deluxe Edition), 2 of 4". To make the conditioning more robust, we shuffle the list of metadata and drop each element with a probability of 0.1. Furthermore, 50% of the time we concatenate the list with spaces and the other 50% of the time we use commas, to make the interface more robust during inference. Some example prompts in our dataset can be seen in Table 2.

Table 2. Example text prompts in our dataset.
1. 415 (Premium Edition), german hip hop, 2 of 7, 2012, XATAR, Konnekt
2. 30 Años de Exitos, Mundanzas, 2 of 6, latin pop, Lupita D'Alessio, 2011
3. emo rap 2018 Runaway Lil Peep 4 of 5
4. Alone, Pt. II (Remixes) 2020 electro house Alone, Pt. II - Da Tweekaz Remix Alan Walker
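The prompt construction described above can be sketched as follows; the helper itself and the example metadata are illustrative, while the shuffle, the 0.1 per-field drop probability, the chunk suffix, and the space/comma choice are taken from the text.

```python
import random

def build_prompt(metadata, chunk_index, total_chunks, drop_prob=0.1):
    """Assemble a training prompt from song metadata (illustrative sketch).

    `metadata` is a list of strings such as title, artist, album, genre, year.
    """
    fields = [m for m in metadata if random.random() > drop_prob]  # drop each field with p=0.1
    random.shuffle(fields)                                         # shuffle the remaining fields
    fields.append(f"{chunk_index} of {total_chunks}")              # e.g. "2 of 4"
    separator = ", " if random.random() < 0.5 else " "             # commas half of the time
    return separator.join(fields)

# Example (hypothetical metadata):
print(build_prompt(["Egyptian Darbuka", "Drums", "Rythm", "(Deluxe Edition)"], 2, 4))
```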
5.2. Implementation Details

We train a 185M-parameter diffusion autoencoder with 7 nested U-Net blocks of increasing channel count ([256, 512, 512, 512, 1024, 1024, 1024]), downsampling by a factor of 2 at each block except the first ([1, 2, 2, 2, 2, 2, 2]). The diffusion autoencoder uses only ResNet and modulation items, with repetitions [1, 2, 2, 2, 2, 2, 2]; attention is not used, to allow decoding of variable-length and possibly very long latents. Channel injection happens only at depth 4, which matches the output of the magnitude encoder latent after the tanh is applied.

Furthermore, we train an 857M-parameter text-conditional generator (including the parameters of the frozen T5-base model) with 6 nested U-Net blocks of increasing channel count ([128, 256, 512, 512, 1024, 1024]), again downsampling by a factor of 2 at each block except the first ([1, 2, 2, 2, 2, 2]). We use attention blocks at the following depths [0, 0, 1, 1, 1, 1], skipping the first two blocks to allow further downsampling before information is shared over the entire latent, and instead use cross-attention blocks at all resolutions ([1, 1, 1, 1, 1, 1]). For both attention and cross-attention, we use 64 head features and 12 heads per layer. We repeat items with an increasing count towards the inner, low-resolution, large-context U-Net blocks ([2, 2, 2, 4, 8, 8]); this allows good structural learning over minutes of audio.
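For reference, the architecture hyperparameters above can be collected into plain configuration dictionaries as below; this is only a restatement of the numbers in the text, not the configuration format of any particular library.

```python
# Summary of the architecture hyperparameters reported in the text (not a library API).
AUTOENCODER_CONFIG = {
    "parameters": "185M",
    "channels": [256, 512, 512, 512, 1024, 1024, 1024],
    "downsample_factors": [1, 2, 2, 2, 2, 2, 2],
    "block_repetitions": [1, 2, 2, 2, 2, 2, 2],
    "attention": False,           # no attention, so arbitrarily long latents can be decoded
    "latent_injection_depth": 4,  # where the 32-channel tanh latent is injected
}

GENERATOR_CONFIG = {
    "parameters": "857M (including frozen T5-base)",
    "channels": [128, 256, 512, 512, 1024, 1024],
    "downsample_factors": [1, 2, 2, 2, 2, 2],
    "attention_depths": [0, 0, 1, 1, 1, 1],
    "cross_attention_depths": [1, 1, 1, 1, 1, 1],
    "attention_head_features": 64,
    "attention_heads": 12,
    "block_repetitions": [2, 2, 2, 4, 8, 8],
}
```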
Both models are trained with the AdamW optimizer (Loshchilov & Hutter, 2019) using a learning rate of $10^{-4}$, $\beta_1 = 0.95$, $\beta_2 = 0.999$, $\epsilon = 10^{-6}$, and a weight decay of $10^{-3}$. Moreover, we use an exponential moving average (EMA) of the weights with $\beta = 0.995$ and a power of 0.7.
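The optimizer settings map directly onto PyTorch as sketched below; the EMA update shown uses a fixed decay of 0.995 and, for brevity, omits the warm-up behavior implied by the power parameter.

```python
import copy
import torch

def make_optimizer(model):
    # AdamW with the hyperparameters reported above.
    return torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.95, 0.999),
                             eps=1e-6, weight_decay=1e-3)

class EMA:
    """Simplified exponential moving average of model weights (decay 0.995)."""

    def __init__(self, model, beta=0.995):
        self.beta = beta
        self.shadow = copy.deepcopy(model).eval()
        for p in self.shadow.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        for ema_p, p in zip(self.shadow.parameters(), model.parameters()):
            ema_p.mul_(self.beta).add_(p, alpha=1 - self.beta)
```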
5.3. Hardware Requirements

We use the limited computational resources available in a university lab. Both models can be trained on a single A100 GPU in one week using a batch size of 32; this is equivalent to around 1M steps for both the diffusion autoencoder and the latent generator. For inference, as an example, a novel audio source of ~88s can be synthesized in less than ~88s on a consumer GPU with a DDIM sampler and a high step count (100 generation steps and 100 decoding steps).

6. Results

As mentioned in Table 1, our model is the only one that generates long-context music from text descriptions. Most other models do not take text as input (van den Oord et al., 2016; Caillon & Esling, 2021; Borsos et al., 2022; Pasini & Schlüter, 2022), and some others use lyrics or descriptions of everyday sounds (e.g., "a dog barking") (Kreuk et al., 2022; Dhariwal et al., 2020). The only text-to-music model comparable with our work is the Riffusion model (Forsgren & Martiros, 2022).

We describe the merits of our model in both quantitative and qualitative ways from multiple perspectives: (1) genre diversity, (2) relevance of the music to the given text prompt, (3) sound quality, and (4) long-term structure in the generated music. Our analyses are reported in Sections 6.1 to 6.3. Note that there is no perfect evaluation metric in the existing literature (Kreuk et al., 2022; Borsos et al., 2022; Dhariwal et al., 2020), since music is a complex artifact with a range of properties (e.g., timbre, rhythm, and structure), not to mention the subjectivity of music perception. In the present work, we try our best to provide a diverse set of angles for evaluating the generated music. In addition, we suggest that readers listen to the provided samples to gain a more holistic impression of our model compared to the Riffusion model (Forsgren & Martiros, 2022): bit.ly/anonymous-mousai.

6.1. Diversity & Text-to-Music Relevance

We design a listener test to illustrate the diversity and text relevance of Moûsai. Specifically, we compose a list of 40 text prompts spanning several common music genres: electronic, hip hop, metal, and pop (see Appendix A for the entire list of prompts, ten per category). Using these prompts, we generate music with both Moûsai and the Riffusion model (Forsgren & Martiros, 2022), for a total of 80 pieces of music, two for each prompt. Qualitatively, we observe that our music samples exhibit good diversity and fit the text descriptions well.
To validate this quantitatively, we conducted a small-scale psychophysics evaluation, recruiting three perceivers (annotators) with diverse demographic backgrounds (both female and male, all with at least a Master's degree of education). Each annotator listens to all 80 music samples we provide and is instructed to categorize each sample into exactly one of the four provided genres. This is a four-alternative forced choice paradigm, i.e., a variant of the two-alternative forced choice setting that is considered the gold standard in psychophysics. We record how many times the perceiver correctly identifies the genre that the respective model was generating from. A large number (or score) means that the model often generated music that, according to the human perceiver, plausibly belonged to the correct category (when compared with the other three categories). To achieve a good score, the model needs to generate diverse and genre-specific music. We take this score as a measure of how well the model performs text-conditional music generation. In Figure 5, we display the confusion matrix of this genre identification test for both our model (left) and the Riffusion model (right). For our model, the annotators identify the right genre most of the time, whereas for the Riffusion model, the annotators often perceive the music as more generic, categorizing it as Pop.

[Figure 5. (a) Confusion matrix for the music pieces generated by Moûsai; (b) confusion matrix for the music pieces generated by the Riffusion model. Evaluation results of genre categorization for our model (left) and the Riffusion model (right), shown as confusion matrices across the four common music genres (electronic, hip hop, metal, and pop). Dark values on the diagonal mean that a model generates music the perceivers categorize into the correct genre. Our model (left) has most of its mass on the diagonal, while the Riffusion model tends to generate generic samples that are very similar to Pop for all genres, making them difficult to categorize correctly. Note that each matrix adds up to 120, corresponding to 40 samples per model annotated by three perceivers each.]
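The scoring described above amounts to tallying a 4x4 confusion matrix from the annotators' forced-choice answers, as in the short sketch below; the answer data is made up for illustration.

```python
import numpy as np

GENRES = ["electronic", "hip hop", "metal", "pop"]

def confusion_matrix(true_genres, chosen_genres):
    """Rows: genre the prompt was generated from; columns: genre chosen by the annotator."""
    idx = {g: i for i, g in enumerate(GENRES)}
    mat = np.zeros((len(GENRES), len(GENRES)), dtype=int)
    for t, c in zip(true_genres, chosen_genres):
        mat[idx[t], idx[c]] += 1
    return mat

# Toy example: three annotators times a few samples (hypothetical answers).
true_g = ["electronic", "metal", "pop", "hip hop"] * 3
chosen = ["electronic", "metal", "pop", "pop"] * 3
m = confusion_matrix(true_g, chosen)
print(m, "accuracy:", np.trace(m) / m.sum())
```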
6.2. Sound Quality

Apart from diversity and relevance, we also evaluate the sound quality of the music we generate. From the mel spectrograms we visualize in Figure 6, we can see that low-frequency sounds are handled rather well by our model. From the music samples we provide, it is apparent that our model performs well with drum-like sounds as frequently found in electronic, house, dubstep, techno, EDM, and metal music. This is likely a consequence of the lower amount of information required to represent low-frequency sounds.

[Figure 6. Mel spectrogram comparison between the true samples (top) and the auto-encoded samples (bottom); cf. text.]

6.3. Structure

Another qualitative advantage of our model is its capability to handle long-term structure, as opposed to the Riffusion model's context length of 5 seconds, as mentioned in Table 1. Our generated samples exhibit structure over longer periods of time, exceeding the minute mark. Rhythm, loops, riffs, and occasionally even entire choruses are found in the generated music. We find that increasing the number of attention blocks (e.g., from a total of 4–8 to a total of 32+) in the latent diffusion model can improve the general structure of the songs, thanks to the long-context view. If the model is trained without attention blocks, the context provided by the U-Net is not large enough to learn any meaningful long-term structure.

6.4. Additional Properties

In addition to the main evaluation results, we also explore several properties of our model, namely the trade-off between speed and quality, the trade-off between compression ratio and quality, and the text-audio binding.

Trade-Off between Speed and Quality. We find that 10 sampling steps in both stages can be enough to generate reasonable audio. We can achieve improved quality and reduced noise for high-frequency sounds by trading off speed, i.e., by increasing the number of sampling steps in the diffusion decoder (e.g., to 50–100 steps).
Increasing the number of sampling steps in the latent diffusion model (again on the order of 50–100 steps) similarly improves quality, likely due to the more detailed generated latents, and at the same time results in overall better-structured music. To make sure the results are comparable when varying the number of sampling steps, we use the same starting noise in both stages. In both cases, this suggests that more advanced samplers could help improve the speed-quality trade-off.

Trade-Off between Compression Ratio and Quality. We find that decreasing the compression ratio of the first stage (e.g., to 32x) can improve the quality of low-frequency sounds, but in turn slows down the model, as the second stage has to work on higher-dimensional data. As proposed later in Section 7, we hypothesize that using perceptually weighted loss functions instead of the L2 loss during diffusion could help this trade-off, giving a more balanced importance to high-frequency sounds even at high compression ratios.

Text-Audio Binding. We find that the text-audio binding works well with a CFG scale higher than 3.0. Since the model is trained with metadata such as title, album, artist, genre, year, and chunk, the best keywords to control the generation appear to be frequent descriptive names, such as the genre of the music, or descriptions commonly found in titles, such as "remix" and "(Deluxe Edition)", among many others. A similar behavior has been observed and exploited in text-to-image models to generate better-looking results. We find that the chunk-based text conditioning is coherent with the description: a description of the form "1 of N" will tend to result in the starting portion of a song, a description of the form "N of N" will tend to result in the ending portion, and anything in between will tend to result in a song playing over the entire generation period.
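As a reminder of how the CFG scale enters at inference time, the sketch below combines the conditional and unconditional denoiser outputs in the standard classifier-free-guidance way; denoise is a hypothetical wrapper around the latent U-Net, and a scale above 3.0 corresponds to the setting reported above.

```python
import torch

@torch.no_grad()
def guided_denoise(denoise, z_noisy, sigma, text_emb, null_emb, cfg_scale=3.0):
    """Classifier-free guidance: push the conditional prediction away from the
    unconditional one by cfg_scale (illustrative; `denoise` stands in for the
    latent U-Net evaluated at noise level sigma)."""
    cond = denoise(z_noisy, sigma, text_emb)    # text-conditional prediction
    uncond = denoise(z_noisy, sigma, null_emb)  # prediction with the learned null embedding
    return uncond + cfg_scale * (cond - uncond)
```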
7. Future Work

Data and Scaling. Increasing the scale of both the data and the model can very likely provide drastic quality improvements. Following Dhariwal et al. (2020) and Borsos et al. (2022), we suggest training with 50k-100k hours of audio instead of 2.5k. Using a larger pre-trained language model to obtain text embeddings has been shown to be very important for quality in image generation (Saharia et al., 2022); we hypothesize that the same holds when applied to our second-stage model.

Diffusion. More sophisticated diffusion samplers could be used to obtain higher quality for the same number of sampling steps; similarly, more advanced distillation techniques could be applied (Salimans & Ho, 2022).

Model. Some promising future modelling approaches that need more experimentation include: (1) training the diffusion models using perceptual losses on the waveforms instead of the L2 loss, which might help decrease the initial size of the U-Net, as we would not have to process non-perceivable sounds; (2) improving the quality of the diffusion autoencoder by using mel spectrograms instead of magnitude spectrograms as input; (3) exploring other, non-text-based types of conditioning, which might be useful for navigating an audio latent space that is often hard to describe in words, in the spirit of DreamBooth-like models (Ruiz et al., 2022).

8. Conclusion

In this work, we presented Moûsai, a waveform-based audio generation method built on two diffusion models. First, we train a diffusion autoencoder that compresses a magnitude-only spectrogram by 64x; using a custom 1D U-Net, the compressed latent is decoded back to a waveform by diffusion. In the second stage, we train a diffusion model to generate a new latent from noise while conditioning on text embeddings extracted from a frozen T5 transformer model, using a 1D U-Net architecture similar to the one in the first stage. We show that, in contrast to earlier approaches, our model can generate minutes of high-quality music in real time on a consumer GPU, with compelling text-audio binding. In addition to the trained models, we provide a collection of open-source libraries with the hope of facilitating future work in the field. We expect that the present work will help pave the way towards higher-quality, longer-context text-to-music generation for future applications.
Author Contributions

Flavio Schneider came up with the idea and implemented all the elements of this paper, which is part of his Master's thesis at ETH Zürich (Schneider, 2023). Zhijing Jin co-supervised the Master's thesis and the work, conducted weekly meetings, helped design the structure of the paper, and led the human evaluation experiments. Bernhard Schölkopf supervised the work and provided precious suggestions throughout, as well as extensive suggestions for the writing. Flavio Schneider, Zhijing Jin, and Bernhard Schölkopf all contributed significantly to the writing and polishing of the paper.

Acknowledgment

We thank Stability AI for their generous support with computational resources. We are also grateful for the generous help of our annotators Andrew Lee, Aylin Gunal, Fernando Gonzalez, and Yiwen Ding. We thank Fernando Gonzalez and Zhiheng Lyu for helping to improve the format of the paper. We thank Nasim Rahaman for early-stage discussions on improving the model design and contributions. This material is based in part upon work supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039B; and by the Machine Learning Cluster of Excellence, EXC number 2064/1, Project number 390727645. Zhijing Jin is supported by PhD fellowships from the Future of Life Institute and Open Philanthropy, as well as travel support from ELISE (GA no 951847) for the ELLIS program.

References
Baevski, A., Zhou, Y., Mohamed, A., and Auli, M. wav2vec 2.0: A framework for self-supervised learning of speech representations. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/92d1e1eb1cd6f9fba3227870bb6d7f07-Abstract.html.

Borsos, Z., Marinier, R., Vincent, D., Kharitonov, E., Pietquin, O., Sharifi, M., Teboul, O., Grangier, D., Tagliasacchi, M., and Zeghidour, N. AudioLM: A language modeling approach to audio generation. CoRR, abs/2209.03143, 2022. doi: 10.48550/arXiv.2209.03143. URL https://doi.org/10.48550/arXiv.2209.03143.

Caillon, A. and Esling, P. RAVE: A variational autoencoder for fast and high-quality neural audio synthesis. CoRR, abs/2111.05011, 2021. URL https://arxiv.org/abs/2111.05011.

Chang, H., Zhang, H., Barber, J., Maschinot, A., Lezama, J., Jiang, L., Yang, M., Murphy, K., Freeman, W. T., Rubinstein, M., Li, Y., and Krishnan, D. Muse: Text-to-image generation via masked generative transformers. CoRR, abs/2301.00704, 2023. doi: 10.48550/arXiv.2301.00704. URL https://doi.org/10.48550/arXiv.2301.00704.

Défossez, A., Copet, J., Synnaeve, G., and Adi, Y. High fidelity neural audio compression. CoRR, abs/2210.13438, 2022. doi: 10.48550/arXiv.2210.13438. URL https://doi.org/10.48550/arXiv.2210.13438.

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), pp. 248-255, 2009.

Deng, K., Bansal, A., and Ramanan, D. Unsupervised audio-visual synthesis via exemplar autoencoders. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=43VKWxg_Sqr.

Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 4171–4186, Minneapolis, Minnesota, June 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='18653/v1/N19-1423.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://aclanthology.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/N19-1423.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Dhariwal, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Jun, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Payne, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Kim, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Radford, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Sutskever, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Jukebox: A generative model for music.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' CoRR, abs/2005.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='00341, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' org/abs/2005.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='00341.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Dieleman, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', van den Oord, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Simonyan, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' The challenge of realistic music generation: Modelling raw audio at scale.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In Bengio, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Wallach, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Larochelle, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Grauman, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Cesa-Bianchi, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Garnett, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' (eds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' ), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 8000– 8010, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://proceedings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion neurips.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='cc/paper/2018/hash/ 3e441eec3456b703a4fe741005f3981f-Abstract.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' html.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Doornbusch, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Gerhard nierhaus: Algorithmic composi- tion: Paradigms of automated music generation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Music.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 34(3):70–74, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='1162/COMJ\\\\_r\\\\_ 00008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='1162/COMJ\\ _r\\_00008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Elizalde, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Deshmukh, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Ismail, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Wang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' CLAP: learning audio concepts from natural language supervision.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' CoRR, abs/2206.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='04769, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2206.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='04769.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/ 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2206.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='04769.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Engel, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Agrawal, K.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Chen, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Gulrajani, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Donahue, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Roberts, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Gansynth: Adversar- ial neural audio synthesis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In 7th International Con- ference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' OpenReview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='net, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://openreview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='net/forum?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' id=H1xQVn09FX.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Esser, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Rombach, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Ommer, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Taming trans- formers for high-resolution image synthesis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 12873–12883.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Computer Vision Foundation / IEEE, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='1109/CVPR46437.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2021.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='01268.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://openaccess.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='thecvf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='com/ content/CVPR2021/html/Esser_Taming_ Transformers_for_High-Resolution_ Image_Synthesis_CVPR_2021_paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='html.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Forsgren, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' and Martiros, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Riffusion - Stable diffusion for real-time music generation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https: //riffusion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='com/about.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Giraudo, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Generation of musical patterns through operads.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' CoRR, abs/2104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='12432, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' org/abs/2104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='12432.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Hinton, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' and Salakhutdinov, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Reducing the di- mensionality of data with neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' science, 313 (5786):504–507, 2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Ho, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' and Salimans, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Classifier-free diffusion guid- ance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' CoRR, abs/2207.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='12598, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2207.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='12598.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/ arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2207.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='12598.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Ho, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Chan, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Saharia, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Whang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Gao, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Gritsenko, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Kingma, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Poole, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Norouzi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Fleet, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Salimans, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Imagen video: High definition video generation with diffusion models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' CoRR, abs/2210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='02303, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='02303.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https: //doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='02303.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Kim, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Hong, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Ro, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Lip to speech synthesis with visual context attentional GAN.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In Ranzato, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Beygelzimer, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Dauphin, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Liang, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Vaughan, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' (eds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' ), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2758– 2770, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://proceedings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' neurips.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='cc/paper/2021/hash/ 16437d40c29a1a7b1e78143c9c38f289-Abstract.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' html.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Kingma, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' and Welling, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Auto-encoding variational bayes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In Bengio, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' and LeCun, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' (eds.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' ), 2nd Interna- tional Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL http://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/ abs/1312.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='6114.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Kong, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Ping, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Huang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Zhao, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Catanzaro, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Diffwave: A versatile diffusion model for audio synthesis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In 9th International Conference on Learn- ing Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' OpenReview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='net, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https: //openreview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='net/forum?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='id=a-xFK8Ymz5J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Kreuk, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Synnaeve, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Polyak, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Singer, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Dé- fossez, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Copet, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Parikh, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Taigman, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Adi, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Audiogen: Textually guided audio generation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' CoRR, abs/2209.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='15352, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2209.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='15352.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/ arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2209.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='15352.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Kumar, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Kumar, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', de Boissiere, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Gestin, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Teoh, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Sotelo, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', de Brébisson, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Bengio, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Courville, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Melgan: Generative adversarial networks for conditional waveform synthesis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In Wallach, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Larochelle, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Beygelzimer, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', d’Alché-Buc, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Fox, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Garnett, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' (eds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' ), Advances in Neural In- formation Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 14881–14892, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://proceedings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' neurips.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='cc/paper/2019/hash/ 6804c9bca0a615bdb9374d00a9fcba59-Abstract.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' html.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Lam, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Wang, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Su, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Yu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' BDDM: bilat- eral denoising diffusion models for fast and high-quality speech synthesis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' OpenReview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='net, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https: //openreview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='net/forum?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='id=L7wzpQttNO.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Lee, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Kim, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Kim, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Cho, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Han, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Au- toregressive image generation using residual quantiza- tion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In IEEE/CVF Conference on Computer Vision Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 11513–11522.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' IEEE, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='1109/CVPR52688.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2022.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='01123.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https:// doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='1109/CVPR52688.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='01123.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Leng, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Chen, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Guo, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Liu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Chen, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Tan, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Mandic, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', He, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Li, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Qin, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Zhao, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Liu, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Binauralgrad: A two-stage conditional diffu- sion probabilistic model for binaural audio synthesis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' CoRR, abs/2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='14807, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/arXiv.' 
2205.14807. URL https://doi.org/10.48550/arXiv.2205.14807.

Li, M., Xu, R., Wang, S., Zhou, L., Lin, X., Zhu, C., Zeng, M., Ji, H., and Chang, S. CLIP-Event: Connecting text and images with event structures. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pp. 16399–16408. IEEE, 2022. doi: 10.1109/CVPR52688.2022.01593. URL https://doi.org/10.1109/CVPR52688.2022.01593.

Loshchilov, I. and Hutter, F. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7.

Mehri, S., Kumar, K., Gulrajani, I., Kumar, R., Jain, S., Sotelo, J., Courville, A. C., and Bengio, Y. SampleRNN: An unconditional end-to-end neural audio generation model. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=SkxKPDv5xl.

Morrison, M., Kumar, R., Kumar, K., Seetharaman, P., Courville, A. C., and Bengio, Y. Chunked autoregressive GAN for conditional waveform synthesis. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id=v3aeIsY_vVX.

Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P. F., Leike, J., and Lowe, R. Training language models to follow instructions with human feedback. CoRR, abs/2203.02155, 2022. doi: 10.48550/arXiv.2203.02155. URL https://doi.org/10.48550/arXiv.2203.02155.

Pasini, M. and Schlüter, J. Musika! Fast infinite waveform music generation. CoRR, abs/2208.08706, 2022. doi: 10.48550/arXiv.2208.08706. URL https://doi.org/10.48550/arXiv.2208.08706.

Pennington, J., Socher, R., and Manning, C. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543, Doha, Qatar, October 2014. Association for Computational Linguistics. doi: 10.3115/v1/D14-1162. URL https://aclanthology.org/D14-1162.

Preechakul, K., Chatthee, N., Wizadwongsa, S., and Suwajanakorn, S. Diffusion autoencoders: Toward a meaningful and decodable representation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pp. 10609–10619. IEEE, 2022. doi: 10.1109/CVPR52688.2022.01036. URL https://doi.org/10.1109/CVPR52688.2022.01036.

Radford, A., Narasimhan, K., Salimans, T., and Sutskever, I. Improving language understanding by generative pre-training. Technical report, OpenAI, 2018.

Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1–140:67, 2020. URL http://jmlr.org/papers/v21/20-074.html.

Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., and Chen, M. Hierarchical text-conditional image generation with CLIP latents. CoRR, abs/2204.06125, 2022. doi: 10.48550/arXiv.2204.06125. URL https://doi.org/10.48550/arXiv.2204.06125.

Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. High-resolution image synthesis with latent diffusion models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pp. 10674–10685. IEEE, 2022. doi: 10.1109/CVPR52688.2022.01042. URL https://doi.org/10.1109/CVPR52688.2022.01042.
Ronneberger, O., Fischer, P., and Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Navab, N., Hornegger, J., III, W. M. W., and Frangi, A. F. (eds.), Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015 - 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III, volume 9351 of Lecture Notes in Computer Science, pp. 234–241. Springer, 2015. doi: 10.1007/978-3-319-24574-4_28. URL https://doi.org/10.1007/978-3-319-24574-4_28.

Ruiz, N., Li, Y., Jampani, V., Pritch, Y., Rubinstein, M., and Aberman, K. DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation. CoRR, abs/2208.12242, 2022. doi: 10.48550/arXiv.2208.12242. URL https://doi.org/10.48550/arXiv.2208.12242.

Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E., Ghasemipour, S. K. S., Ayan, B. K., Mahdavi, S. S., Lopes, R. G., Salimans, T., Ho, J., Fleet, D. J., and Norouzi, M. Photorealistic text-to-image diffusion models with deep language understanding. CoRR, abs/2205.11487, 2022. doi: 10.48550/arXiv.2205.11487. URL https://doi.org/10.48550/arXiv.2205.11487.
Salas, H. A. G., Gelbukh, A. F., Calvo, H., and Soria, F. G. Automatic music composition with simple probabilistic generative grammars. Polibits, 44:59–65, 2011. doi: 10.17562/pb-44-9. URL https://doi.org/10.17562/pb-44-9.

Salimans, T. and Ho, J. Progressive distillation for fast sampling of diffusion models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id=TIdIXIpzhoI.

Schneider, F. ArchiSound: Audio generation with diffusion. January 2023. URL https://github.com/flavioschneider/master-thesis/blob/main/audio_diffusion_thesis.pdf.

Song, J., Meng, C., and Ermon, S. Denoising diffusion implicit models. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=St1giarCHLP.

van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A. W., and Kavukcuoglu, K. WaveNet: A generative model for raw audio. In The 9th ISCA Speech Synthesis Workshop, Sunnyvale, CA, USA, 13-15 September 2016, pp. 125. ISCA, 2016. URL http://www.isca-speech.org/archive/SSW_2016/abstracts/ssw9_DS-4_van_den_Oord.html.

van den Oord, A., Vinyals, O., and Kavukcuoglu, K. Neural discrete representation learning. In Guyon, I., von Luxburg, U., Bengio, S., Wallach, H. M., Fergus, R., Vishwanathan, S. V. N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 6306–6315, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/7a98af17e63a0ac09ce2e96d03992fbc-Abstract.html.

Villegas, R., Babaeizadeh, M., Kindermans, P., Moraldo, H., Zhang, H., Saffar, M. T., Castro, S., Kunze, J., and Erhan, D. Phenaki: Variable length video generation from open domain textual description. CoRR, abs/2210.02399, 2022. doi: 10.48550/arXiv.2210.02399. URL https://doi.org/10.48550/arXiv.2210.02399.

Yang, D., Yu, J., Wang, H., Wang, W., Weng, C., Zou, Y., and Yu, D. Diffsound: Discrete diffusion model for text-to-sound generation. CoRR, abs/2207.09983, 2022. doi: 10.48550/arXiv.2207.09983. URL https://doi.org/10.48550/arXiv.2207.09983.

Yu, B., Lu, P., Wang, R., Hu, W., Tan, X., Ye, W., Zhang, S., Qin, T., and Liu, T. Museformer: Transformer with fine- and coarse-grained attention for music generation. CoRR, abs/2210.10349, 2022a. doi: 10.48550/arXiv.2210.10349. URL https://doi.org/10.48550/arXiv.2210.10349.

Yu, J., Xu, Y., Koh, J. Y., Luong, T., Baid, G., Wang, Z., Vasudevan, V., Ku, A., Yang, Y., Ayan, B. K., Hutchinson, B., Han, W., Parekh, Z., Li, X., Zhang, H., Baldridge, J., and Wu, Y. Scaling autoregressive models for content-rich text-to-image generation. CoRR, abs/2206.10789, 2022b. doi: 10.48550/arXiv.2206.10789. URL https://doi.org/10.48550/arXiv.2206.10789.

Zeghidour, N., Luebs, A., Omran, A., Skoglund, J., and Tagliasacchi, M. SoundStream: An end-to-end neural audio codec. IEEE ACM Trans. Audio Speech Lang. Process., 30:495–507, 2022. doi: 10.1109/TASLP.2021.3129994. URL https://doi.org/10.1109/TASLP.2021.3129994.

A. Text Prompts

We list all the text prompts composed for the four common music genres in Table 3.

Genre = Electronic
– Drops, Kanine Remix, Darkzy, Drops Remixes, bass house, (Deluxe) (Remix) 3 of 4
– Electronic, Dance, EDM (Deluxe) (Remix) 3 of 4
– Electro House (Remix), 2023, 3 of 4
– Electro Swing Remix 2030 (Deluxe Edition) 3 of 4
– Future Bass, EDM (Remix) 3 of 4, Remix
– EDM (Deluxe) (Remix) 3 of 4
– EDM,
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Vocal,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Relax,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Remix,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2023,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 8D Audio – Hardstyle,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Drop,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 8D,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Remix,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' High Quality,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2 of 4 – Dubstep Insane Drop Remix (Deluxe Edition),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2 of 4 – Drop,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' French 79,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' BPM Artist,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Vol.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 4,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Electronica,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2016 Genre = Hip Hop – Real Hip Hop,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2012,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Lil B,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Gods Father,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' escape room,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – C’est toujours pour ceux qui savent,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' French Hip Hop,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2018 (Deluxe),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Dejando Claro,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Latin Hip Hop 2022 (Deluxe Edition) 3 of 4 – Latin Hip Hop 2022 (Deluxe Edition) 3 of 4 – Alternative Hip Hop Oh-My,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2016,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' (Deluxe),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Es Geht Mir Gut,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' German Hip Hop,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2016,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' (Deluxe),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Italian Hip Hop 2022 (Deluxe Edition) 3 of 4 – RUN,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Alternative Hip Hop,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2016,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' (Deluxe),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} 
+page_content=' 3 of 4 – Hip Hop,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Rap Battle,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2018 (High Quality) (Deluxe Edition) 3 of 4 – Hip Hop Tech,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Bandlez,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Hot Pursuit,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' brostep,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 Genre = Metal – Death Metal,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2012,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Heavy Death Metal (Deluxe Edition),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Black Alternative Metal,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' The Pick of Death (Deluxe),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2006,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Kill For Metal,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Iron Fire,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' To The Grave,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' melodic metal,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Melodic Metal,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Iron Dust (Deluxe),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2006,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Possessed Death Metal Stones (Deluxe),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2006,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Black Metal Venom,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2006,' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – The Heavy Death Metal War (Deluxe),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2006,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Heavy metal (Deluxe Edition),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Viking Heavy Death Metal (Deluxe),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2006,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 Genre = Pop – (Everything I Do),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' I Do It For You,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Bryan Adams,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' The Best Of Me,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' canadian pop,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Payphone,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Maroon 5,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Overexposed,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Pop,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2021,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – 24K Magic,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Bruno Mars,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 24K Magic,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' dance pop,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Who Is It,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Michael Jackson,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Dangerous,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Pop (Deluxe),' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Forget Me,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Lewis Capaldi,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Forget Me,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Pop Pop,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2022,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Pop,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Speak Now,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Taylor Swift,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2014,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' (Deluxe),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Pop Pop,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Maroon 5,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Overexposed,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2016,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Pointless,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Lewis Capaldi,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Pointless,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Pop,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2022,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Saved,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Khalid,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' American Teen,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Pop,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2022,' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 – Deja vu,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Fearless,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Pop,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2020,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' (Deluxe),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3 of 4 Table 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Text prompts composed for the four common music gen- res: electronic, hip hop, metal, and pop.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'}
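The prompts in Table 3 all follow a loose comma-separated pattern of song metadata (title or remix tags, artist, album, genre descriptors, release year) terminated by a chunk position such as "3 of 4". As an illustration only, and not the authors' actual data pipeline, the following Python sketch shows one way such a prompt string could be assembled from metadata fields; the SongMetadata class and make_prompt function are hypothetical names introduced here.

```python
# Hypothetical sketch of how a Table-3-style text prompt could be assembled
# from song metadata; this is not the paper's actual code.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SongMetadata:
    title: Optional[str] = None   # e.g. "Payphone"
    artist: Optional[str] = None  # e.g. "Maroon 5"
    album: Optional[str] = None   # e.g. "Overexposed"
    genre: Optional[str] = None   # e.g. "Pop"
    year: Optional[int] = None    # e.g. 2021


def make_prompt(meta: SongMetadata, chunk_index: int, num_chunks: int) -> str:
    """Join the available metadata fields with commas and append the
    position of the current crop within the song, e.g. '3 of 4'."""
    fields = [
        meta.title,
        meta.artist,
        meta.album,
        meta.genre,
        str(meta.year) if meta.year is not None else None,
    ]
    parts = [f for f in fields if f]                # drop missing fields
    parts.append(f"{chunk_index} of {num_chunks}")  # chunk position tag
    return ", ".join(parts)


# Example: reproduces the style of one of the pop prompts in Table 3.
print(make_prompt(SongMetadata("Payphone", "Maroon 5", "Overexposed", "Pop", 2021), 3, 4))
# -> "Payphone, Maroon 5, Overexposed, Pop, 2021, 3 of 4"
```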