diff --git "a/7dA0T4oBgHgl3EQfOf8M/content/tmp_files/load_file.txt" "b/7dA0T4oBgHgl3EQfOf8M/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/7dA0T4oBgHgl3EQfOf8M/content/tmp_files/load_file.txt" @@ -0,0 +1,482 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dA0T4oBgHgl3EQfOf8M/content/2301.02160v1.pdf,len=481 +page_content='ANNA: Abstractive Text-to-Image Synthesis with Filtered News Captions Aashish Anantha Ramakrishnan Sharon X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dA0T4oBgHgl3EQfOf8M/content/2301.02160v1.pdf'} +page_content=' Huang Dongwon Lee The Pennsylvania State University, State College, Pennsylvania, USA {aza6352, suh972, dul13}@psu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dA0T4oBgHgl3EQfOf8M/content/2301.02160v1.pdf'} +page_content='edu Abstract Advancements in Text-to-Image synthesis over recent years have focused more on improving the quality of gener- ated samples on datasets with descriptive captions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dA0T4oBgHgl3EQfOf8M/content/2301.02160v1.pdf'} +page_content=' How- ever, real-world image-caption pairs present in domains such as news data do not use simple and directly descrip- tive captions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dA0T4oBgHgl3EQfOf8M/content/2301.02160v1.pdf'} +page_content=' With captions containing information on both the image content and underlying contextual cues, they be- come abstractive in nature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dA0T4oBgHgl3EQfOf8M/content/2301.02160v1.pdf'} +page_content=' In this paper, we launch ANNA, an Abstractive News captioNs dAtaset extracted from on- line news articles in a variety of different contexts.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dA0T4oBgHgl3EQfOf8M/content/2301.02160v1.pdf'} +page_content=' We explore the capabilities of current Text-to-Image synthesis models to generate news domain-specific images using ab- stractive captions by benchmarking them on ANNA, in both standard training and transfer learning settings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dA0T4oBgHgl3EQfOf8M/content/2301.02160v1.pdf'} +page_content=' The gen- erated images are judged on the basis of contextual rele- vance, visual quality, and perceptual similarity to ground- truth image-caption pairs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dA0T4oBgHgl3EQfOf8M/content/2301.02160v1.pdf'} +page_content=' Through our experiments, we show that techniques such as transfer learning achieve lim- ited success in understanding abstractive captions but still fail to consistently learn the relationships between content and context features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dA0T4oBgHgl3EQfOf8M/content/2301.02160v1.pdf'} +page_content=' The ANNA Dataset is available at https://github.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dA0T4oBgHgl3EQfOf8M/content/2301.02160v1.pdf'} +page_content='com/aashish2000/ANNA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7dA0T4oBgHgl3EQfOf8M/content/2301.02160v1.pdf'} +page_content=' 1.' 
1. Introduction

Image Generation has been improving by leaps and bounds over the last few years thanks to advancements in Generative Modelling approaches and availability of higher compute capacities [20]. Areas such as text-to-image synthesis have grown in prominence due to the development of model pre-training paradigms on vast image-text pairs mined from the internet [17]. This has promoted the use of these generators for a variety of applications such as online content creation, art synthesis [14] and even more malicious use-cases such as DeepFake generation [31]. With Internet news media and social networking websites becoming the more preferred forms of news distribution, the impact that generative modelling, especially semantically-relevant image generation, can have on the news media industry is significant. Images accompanying news articles are primarily used as supporting media to convey the key message of the article along with complementary information to aid reader understanding.

Figure 1. Example of descriptive captions from the COCO Captions dataset [2] (Above) and abstractive captions from ANNA (Below). In this case, the abstractive captions contain high-level visual content information relevant to the type of room depicted and contextual information explaining who are its inhabitants, who sponsored it, etc.
[Figure 1 example captions: "This room has a bed with blue sheets and a large bookcase" (COCO Captions) and "A room in a shelter for victims of domestic violence that was able to reopen recently because of a contribution from a donor" (ANNA).]

Commonly, text-to-image synthesis has made use of descriptive captions, where only visual objects present within each image are described in detail. However, news captions also relay contextual information correlating the image's contents to the article. The captions are thus abstractive (beyond being descriptive), containing both higher-level descriptive information and contextual cues.
Here, we define context of a text caption as an attribute that does not have a direct visual translation, but contributes towards modifying an image's appearance in relation to the situation in which the image is referenced. Fig. 1 provides an example of this, where both images depict rooms within living spaces, but there is a noticeable difference in the appearance of a room within a house and that of a shelter. The study of pragmatic reasoning in linguistics [5] typically deals with how the informativeness of text is influenced by its relevance to context. Past research has established the importance of pragmatic factors in ascertaining the true meaning of context-driven text information and how it affects accurate captioning of images [22], [21], [11]. Directly descriptive captions lack this contextual grounding, limiting their usefulness for describing news images. To replicate the same types of images with contextual relevance using descriptive captions, we require intensive caption engineering efforts. This combination of image content information along with contextual cues makes abstractive captions much more challenging to understand, directly impacting the relevance of generated results. Current datasets for Text-to-Image synthesis are either focused on narrow domains with simple, descriptive captions or contain minimally filtered image-text pairs from a multitude of online sources. There are not many domain-specific datasets with image-caption pairs containing contextual information in addition to image descriptions.
Additionally, while most models use improved visual quality of output images as indicators of superior performance, not much focus is placed on evaluating the correlation between the output image and input text captions. This becomes more important when dealing with captions whose features are only partially aligned with the ground truth images due to their non-descriptive nature. The task of abstractive text-to-image synthesis aims to generate images from abstractive captions with contextual cues. To evaluate this task, we design ANNA, a dataset containing abstractive captions pertaining to news image-caption pairs. Abstractive captions can motivate text-to-image synthesis models to effectively identify these different feature types along with their relative importance and represent them appropriately in generated images. With current Text-to-Image architectures implicitly delineating content and context features, we provide detailed visualizations of both their success and failure cases on ANNA and the need for better understanding of sentence structures for generating image features.

Our contributions in this paper can be summarized as the following:

- We introduce ANNA, a dataset containing approximately 30K abstractive image-caption pairs from popular media outlet The New York Times
- We show how current Text-to-Image architectures are able to understand abstractive captions and transfer-learned concepts from descriptive captions for abstractive text-to-image synthesis
- Using an exhaustive set of evaluation metrics, we benchmark popular Text-to-Image architectures on the basis of generated image quality, image similarity to ground truth images and contextual relevance with reference captions

2. Related Work

Text-to-Image Synthesis. Text-to-Image synthesis is a multi-modal generation task that produces relevant images conditioned on features described in a text caption.
Initial approaches such as [16] found success by leveraging Generative Adversarial Networks (GANs) [4] for this task. Motivated by the success of GANs, StackGAN [30] uses a stacked generator to simplify the generation pipeline into stages: semantically relevant low-resolution image synthesis followed by progressive up-scaling and defect correction. AttnGAN [26] integrates an attention mechanism to capture sentence- and word-level features for increasing the correlation between generated images and input text. AttnGAN proved to be a strong baseline based on which multiple advancements were developed. One such approach, DMGAN [33], integrates a dynamic memory based refinement module for improving image quality and key-word selection from reference captions. [29] and [27] build on the same model architecture by introducing Contrastive learning approaches to improve consistency between learned text and image representations. In recent years, the success of Vision-Language Pre-training (VLP) has prompted the development of newer and more robust Text-to-Image synthesis architectures. Contrastive Language-Image Pre-training (CLIP) [13] is one of the largest open-source, pre-trained models that uses raw text for supervising the learning process of visual concepts. Using pre-trained encoders such as CLIP for input text captions, [32], [15], and [14] use different generator architectures such as GANs, Auto-regressive Transformers and Diffusion models respectively.

Datasets. Traditional datasets used as benchmarks for measuring Text-to-Image synthesis include the domain-specific datasets Oxford-102 Flowers [12] and CUB-200 [23]. Oxford-102 Flowers contains images of 102 classes of flowers along with 5 human-annotated descriptions per image. Similarly, the CUB-200 dataset contains 11,788 images of 200 subcategories belonging to different categories of birds along with 5 captions per image.
The captions for each of the images in CUB-200 and Oxford-102 were collected and released by [16] as a part of their evaluation. COCO Captions [2] is another popular dataset developed using images from MS-COCO [9], a large-scale object detection dataset. It contains over one and a half million captions describing over 330,000 images containing 80 different classes of everyday objects. Some of the other datasets used for this task include the Multi-Modal-CelebA-HQ Dataset [25], which provides text descriptions of facial features for images sourced from the CelebA-HQ dataset [6]. Conceptual Captions [18] consists of over 3 million image-caption pairs mined from the internet. In this dataset, all the captions are hypernymized, i.e. all proper nouns and named-entities are replaced with their respective hypernyms to make the captions simpler to learn and more descriptive. [1] expands this dataset by increasing the number of image-caption pairs from 3 million to 12 million. All the datasets discussed above focus on descriptive captions for each image, where minimal or no contextual information regarding the image is present. Our dataset is one of the first to investigate the previously unexplored interaction between content and context features for text-to-image synthesis.

3. Constructing Abstractive News Captions Dataset: ANNA

The ANNA (Abstractive News captioNs dAtaset) has been constructed to perform news image generation using abstractive captions. We source images from the NYTimes800K dataset [19], which contains news articles and associated image-caption pairs scraped from the news organization The New York Times (NYT).
This dataset was originally developed for News Image Captioning. Using news image-caption pairs from a reputable media outlet such as NYT helps ensure the dataset's quality. As we aim to observe the relationship between content and context features and how it translates to generated images, we focus on selecting generalizable entities within our dataset. News data contains a multitude of named-entities, often with very low repetition and distinct physical appearances, such as faces and geographic landmarks. The inclusion of named-entities from news images would drastically increase the complexity of the generative task. The inability to accurately generate named-entity attributes would further hamper context feature representation due to their inter-dependent nature. In order to combat the mentioned issues, we carefully curate our dataset to include image-caption pairs containing adequate contextual and content related information. We select image-caption pairs with less dependence on named-entities and more general visual components to make the task feasible. The specific preprocessing and filtering approaches utilized are detailed below.

3.1. Preprocessing and Filtering Approaches

The original NYTimes800K dataset contains 445K news articles accompanied by 793K image-caption pairs. It spans 14 years of articles published on The NYT website. The dataset has been provided as a MongoDB dump for public access.
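To make this concrete, the following is a minimal sketch of iterating such a restored dump with pymongo. The database, collection and field names used here ("nytimes", "articles", "images", "caption", "image_path", "ner_tags") are hypothetical placeholders rather than the dump's actual schema.

```python
# Minimal sketch of iterating a restored NYTimes800K-style MongoDB dump.
# All database, collection and field names below are illustrative
# placeholders, not the dump's actual schema.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["nytimes"]  # hypothetical database name


def iter_image_caption_pairs():
    """Yield (image_path, caption, ner_tags) tuples from the dump."""
    for article in db["articles"].find({}):      # hypothetical collection
        for img in article.get("images", []):    # hypothetical field layout
            yield img.get("image_path"), img.get("caption"), img.get("ner_tags", [])


if __name__ == "__main__":
    for path, caption, tags in iter_image_caption_pairs():
        print(path, (caption or "")[:60], tags)
        break
```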
The first step of preprocessing focuses on removing image-caption pairs with explicit entities described both in images and text. We use the provided NER tags for each caption for filtering. We exclude all captions containing the NER tags 'PERSON', 'GPE', 'LOC', 'WORK OF ART', 'ORG'. This ensures any visually significant named-entity without adequate description isn't present in the dataset. Subsequently, we also set bounds on the caption length between 4 to 70 words. Any captions shorter than 4 words would not be informative enough for extracting usable features, and captions longer than 70 words cannot be handled by the CLIP-based Text encoder [13] that we employ in our experiments. Following caption-based filtering, we also remove all images where human faces are clearly visible in the foreground. We accomplish this by using a RetinaFace-based face detector [3], removing around 1000 additional images. Through these filtering techniques, we extract relevant image-caption pairs and corresponding article headlines from the NYTimes800K Dataset. Data pre-processing steps include uniformly scaling our news images to our target input resolution of 256x256. To accomplish this, we relatively scale the smaller dimension (height or width) of the image to our target resolution and take its center crop. This ensures that we have minimal information loss and also helps center the foreground objects in each image. Disclaimer: The dataset samples may use words or language that is considered profane, vulgar, or offensive by some readers as they are extracted from real-world news articles.
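A condensed sketch of the caption filter and the resize-then-center-crop step described above is shown below. It assumes the NER tags are recomputed with spaCy (the dataset itself ships with precomputed tags) and uses Pillow for image handling; the RetinaFace-based face filtering step is omitted.

```python
# Sketch of the caption filter and image preprocessing described above.
# Assumes spaCy is used to (re)compute NER tags; the paper filters on the
# tags provided with NYTimes800K.
import spacy
from PIL import Image

nlp = spacy.load("en_core_web_sm")
EXCLUDED_NER = {"PERSON", "GPE", "LOC", "WORK_OF_ART", "ORG"}  # spaCy label names
MIN_WORDS, MAX_WORDS = 4, 70
TARGET = 256


def keep_caption(caption: str) -> bool:
    """Drop captions with excluded entity types or out-of-range lengths."""
    n_words = len(caption.split())
    if n_words < MIN_WORDS or n_words > MAX_WORDS:
        return False
    doc = nlp(caption)
    return not any(ent.label_ in EXCLUDED_NER for ent in doc.ents)


def resize_center_crop(path: str) -> Image.Image:
    """Scale the shorter side to 256 px, then take a 256x256 center crop."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    scale = TARGET / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)), Image.BICUBIC)
    w, h = img.size
    left, top = (w - TARGET) // 2, (h - TARGET) // 2
    return img.crop((left, top, left + TARGET, top + TARGET))
```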
3.2. Dataset Insights

The filtered and pre-processed version of ANNA contains 29,625 image-text pairs. We split the dataset into Train, Validation and Testing sets in the ratio of 80%:10%:10% respectively. All metric scores reported have been calculated on the Test set. To better understand the composition of the dataset, we analyze various attributes of the image-text pairs and the articles they have been selected from.

Dataset                  | Unique Tokens | Caption Length Mean | Caption Length StdDev
ANNA Train               | 17897         | 14.1                | 7.75
ANNA Validation          | 1622          | 13.8                | 7.60
ANNA Test                | 1649          | 14.1                | 7.71
COCO Captions Train      | 11046         | 10.4                | 1.75
COCO Captions Validation | 4758          | 10.4                | 1.74
Table 1. Dataset Statistics of ANNA and COCO-Captions
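For reference, the 80%:10%:10% split above can be reproduced with a simple seeded shuffle; the seed and ordering in this sketch are illustrative and not necessarily those used to produce the released splits.

```python
# Minimal sketch of an 80/10/10 train/validation/test split over the
# filtered image-caption pairs; seed and ordering are illustrative only.
import random


def split_dataset(pairs, seed=0):
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    n = len(pairs)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    train = pairs[:n_train]
    val = pairs[n_train:n_train + n_val]
    test = pairs[n_train + n_val:]
    return train, val, test
```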
3.2.1 Caption Statistics

In this section, we evaluate different statistical measures for quantifying the distribution of captions across the dataset. Table 1 shows the average caption length of captions present in the dataset and across the train, validation and test subsets. We see that the average caption lengths are similar across the different data splits, with the average caption length being slightly greater than that of the COCO Captions dataset. We also show the standard deviation in caption sizes across different image-caption pairs. We also examine the words appearing in these captions by identifying the unique tokens present. To calculate the unique tokens, we use the spaCy library for tokenizing and lemmatizing our captions along with the removal of all stop words. Subsequently, we tag the different Parts of Speech (POS) present and select tokens that belong to the classes [Common Noun, Proper Noun, Adjectives and Verbs]. This provides a minimum guarantee that the abstractive captions present are long enough to contain adequate content and contextual features. This analysis also ensures that the composition of captions present in the train, validation and test splits are consistent with each other.
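The unique-token statistic can be sketched as follows; the specific spaCy model name is an assumption, since the text only states that spaCy is used for tokenization and lemmatization.

```python
# Sketch of the unique-token statistic: lemmatize with spaCy, drop stop
# words, and keep only nouns, proper nouns, adjectives and verbs.
# The model name is an assumption; the paper does not specify one.
import spacy

nlp = spacy.load("en_core_web_sm")
KEEP_POS = {"NOUN", "PROPN", "ADJ", "VERB"}


def unique_tokens(captions):
    """Return the set of lemmatized content-word tokens over all captions."""
    vocab = set()
    for doc in nlp.pipe(captions):
        for tok in doc:
            if tok.is_stop or not tok.is_alpha:
                continue
            if tok.pos_ in KEEP_POS:
                vocab.add(tok.lemma_.lower())
    return vocab

# len(unique_tokens(train_captions)) would correspond to the
# "Unique Tokens" column of Table 1 for the train split.
```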
3.2.2 News Image Analysis

Along with the captions, we also estimate image properties such as the number of recognizable objects present in each image and the average number of detected objects per image. We use a YOLO-R based object detector [24] for identifying the objects present in each image of our dataset. The YOLO-R detector has been trained on the MS-COCO dataset, containing 80 unique object classes of commonplace objects [2]. We test the pre-trained model with 0.4 as the confidence threshold. We find that there are an average of 2.57 objects per image in ANNA. Fig. 2 shows the most frequently appearing classes of objects in our dataset using a treemap for visualization.

[Figure 2. Object Frequency Analysis using Treemaps. The treemap shows detected object class frequencies; the most frequent classes include person (17,069) and chair (5,252), followed by common objects such as dining table, bowl, bottle, book, potted plant, cup, dog, bird, cake, spoon, couch, boat, car, bus, truck, tv, cat, bench, clock, knife, fork, bed, vase and others.]
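The objects-per-image statistic can be reproduced with any COCO-pretrained detector. The sketch below uses torchvision's Faster R-CNN as a stand-in for the YOLO-R detector referenced above; only the 0.4 confidence threshold is taken from the text.

```python
# Sketch of the objects-per-image statistic using a COCO-pretrained
# detector. torchvision's Faster R-CNN is a stand-in for YOLO-R here.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()


@torch.no_grad()
def average_object_count(image_paths, conf_threshold=0.4):
    """Return the average number of detections above the threshold."""
    counts = []
    for path in image_paths:
        img = to_tensor(Image.open(path).convert("RGB"))
        preds = model([img])[0]  # dict with 'boxes', 'labels', 'scores'
        counts.append(int((preds["scores"] >= conf_threshold).sum()))
    return sum(counts) / max(len(counts), 1)
```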
3.2.3 Categories of News Articles Selected

In this section, we identify the different types of news articles from which image-caption pairs were sourced for dataset construction. In total, there exist 123 unique article topics within our dataset. Only 13 image-caption pairs do not have accompanying article type information, so we disregard those pairs from our article topic analysis. From Fig. 3, we see that there exists a good distribution across topics such as Dining, Business, Real Estate, etc. This shows that the news image-caption pairs are diverse and not limited to only a particular type of news article.

[Figure 3. Visualizing Article Categories of image-caption pairs present in ANNA. Categories shown include Metro (3,646), Dining (3,010), Business, RealEstate, Science, National, Foreign, Travel, Culture, Styles, Sports, Weekend, Home, Magazine, TStyle, Automobiles, SundayBusiness, Metropolitan, Arts&Leisure, OpEd, NYTNOW, BookReview, Escapes, SpecialSections, Learning, CityWeekly, Express, Washington, Upshot and Regionals.]

4. Experiments

In order to understand how different architectures learn abstractive captions on ANNA, we consider various text-to-image synthesis models previously proposed in literature. The three model architectures we test as a part of our evaluation are: Lafite [32], AttnGAN+CL [26] and DMGAN+CL [27]. These models are selected for comparison as they are among the top-10 on the COCO Captions Text-to-Image synthesis leaderboard and take significantly different approaches for tackling the same task. As all these models have achieved State-of-the-Art scores on descriptive caption datasets, we evaluate how they perform with news domain-specific, abstractive captions in our experiments and visualize our results.

Text-to-Image Synthesis Models. The Lafite model utilizes a pre-trained CLIP encoder for translating text embeddings into the image feature space. It adapts an unconditional StyleGAN2 generator [7] by injecting text-conditional information through affine transformations. Two Fully Connected Layers are utilized to transform the input text features to be more semantically similar with StyleGAN's image Stylespace. In our experiments, we train Lafite on ANNA in a fully-supervised setting. We train 2 variants of Lafite, with and without Transfer Learning. In the non-transfer learning variant, we train it on ANNA until convergence for 4000 epochs. To perform Transfer Learning, we initialize the model with pre-trained weights from the Conceptual Captions (CC3M) dataset [18] and continue training on ANNA until convergence for 2000 epochs.

Model                      | IS (↑) | FIDCLIP (↓) | LPIPS (↓) | CLIPScore (↑)
Lafite (Transfer Learning) | 16.49  | 13.93       | 0.7470    | 0.7575
Lafite (Base)              | 12.59  | 20.48       | 0.7432    | 0.7277
DMGAN+CL (512 dim)         | 14.07  | 29.30       | 0.7568    | 0.5913
DMGAN+CL (256 dim)         | 13.37  | 29.87       | 0.7581    | 0.5861
AttnGAN+CL (512 dim)       | 12.56  | 41.00       | 0.7623    | 0.5695
AttnGAN+CL (256 dim)       | 13.06  | 37.41       | 0.7616    | 0.5748
Table 2. Results of Abstractive Text-to-Image synthesis on ANNA
The AttnGAN+CL and DMGAN+CL models share similar architectures, with both utilizing a Deep Attentional Multimodal Similarity Model (DAMSM) for computing the similarity between extracted images and text. These architectures have been supplemented with a Contrastive Learning loss function along with their DAMSM loss to improve pre-training performance. We first train the DAMSM module on the Train and Validation sets of our dataset to construct the mapping between image and text features. We compare 2 different embedding sizes of the DAMSM module for both models: 256 and 512. The default AttnGAN and DMGAN models have 256 embedding feature vectors by default, but the CLIP based model Lafite uses 512 embedding feature vectors instead. Thus, we train the models with both embedding sizes to ensure a fair comparison.

Evaluation Metrics. To evaluate the performance of these architectures, we report 4 different metrics: Inception Score (IS), Fréchet Inception Distance (FID), Learned Perceptual Image Patch Similarity (LPIPS) and CLIPScore. IS and FID evaluate the quality and diversity of generated images. They estimate probability distribution properties of the generated images and how far they diverge from those of the reference images. For FID, we adapt the proposed FIDCLIP from [8] due to its closer correspondence with human judgement on real-world, diverse datasets. LPIPS judges the perceptual similarity between the reference and generated images using deep features extracted across image patches instead of measuring pixel-level similarity. We use LPIPS version 0.1 for our testing.
Since LPIPS is an image-wise similarity metric, we report the average of scores obtained by the generated test set images. CLIPScore is a reference-free metric that can be employed to evaluate the relevance of input text captions to the content of generated images. We selected these 4 metrics as they provide a holistic evaluation of the different key aspects involved in measuring text-to-image model performance. We report our scores in Table 2.
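A sketch of the LPIPS and CLIP-based relevance computations is given below, using the official lpips package (version '0.1' weights, as stated above) and OpenAI's clip package. The exact CLIPScore variant behind the reported numbers (raw cosine similarity versus the rescaled 2.5 * max(cos, 0) formulation) is not restated here, so the cosine similarity shown is only illustrative.

```python
# Sketch of the LPIPS and CLIP-based relevance evaluation. Uses the
# `lpips` package (version '0.1' weights) and OpenAI's `clip` package;
# the CLIP relevance shown is a plain cosine similarity, which may differ
# from the exact CLIPScore variant used for the reported numbers.
import torch
import lpips
import clip
from PIL import Image
from torchvision.transforms.functional import to_tensor

device = "cuda" if torch.cuda.is_available() else "cpu"
lpips_fn = lpips.LPIPS(net="alex", version="0.1").to(device)
clip_model, clip_preprocess = clip.load("ViT-B/32", device=device)


def lpips_distance(real_path, gen_path):
    """LPIPS distance between two images; inputs are scaled to [-1, 1]."""
    def load(p):
        t = to_tensor(Image.open(p).convert("RGB").resize((256, 256)))
        return (t * 2 - 1).unsqueeze(0).to(device)
    with torch.no_grad():
        return lpips_fn(load(real_path), load(gen_path)).item()


def clip_relevance(gen_path, caption):
    """Cosine similarity between CLIP image and text embeddings."""
    image = clip_preprocess(Image.open(gen_path)).unsqueeze(0).to(device)
    text = clip.tokenize([caption], truncate=True).to(device)
    with torch.no_grad():
        img_emb = clip_model.encode_image(image)
        txt_emb = clip_model.encode_text(text)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (img_emb * txt_emb).sum().item()
```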
Figure 4. Result Visualization for Caption: The castle, draped with vines and adorned with bougainvillea, is set on 10 acres, with gardens, a swimming pool and a private chapel. Panels: (a) Original Image, (b) Lafite (Transfer Learning), (c) Lafite (Base), (d) DMGAN (512 dim), (e) DMGAN (256 dim), (f) AttnGAN (512 dim), (g) AttnGAN (256 dim).

Figure 5. Result Visualization for Caption: Pollutants in the Gowanus Canal include pesticides, heavy metals and carcinogens like PCBs. Panels: (a)-(g) as in Figure 4.

4.1. Evaluation of Generated Samples

Image Quality. From the reported IS and FID scores, we can clearly see that Lafite with Transfer Learning outperforms all other models. Although the IS score of the baseline model is lower than that of DMGAN+CL, this trend is reversed in the FID scores. This result can be attributed to the Inception model's feature space being aligned to the classes present in ImageNet, which penalizes datasets that diverge from this distribution [8]. The updated CLIP feature space used for computing FID_CLIP helps mitigate this issue and makes the metric more resistant to fluctuations caused by image preprocessing and distortions.
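To make the metric concrete, the sketch below shows a minimal FID-style distance computed over CLIP image features rather than Inception features. This is not the paper's evaluation code; the Hugging Face transformers CLIP checkpoint, batching, and image loading are illustrative assumptions.

```python
# Minimal sketch: Frechet distance between CLIP image features of real and
# generated image sets (an FID_CLIP-style score). Illustrative only; the
# checkpoint name and I/O details are assumptions, not the paper's setup.
import numpy as np
import torch
from PIL import Image
from scipy import linalg
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_features(image_paths, batch_size=32):
    """Return an (N, d) array of CLIP image embeddings for a list of files."""
    feats = []
    for i in range(0, len(image_paths), batch_size):
        images = [Image.open(p).convert("RGB") for p in image_paths[i:i + batch_size]]
        inputs = processor(images=images, return_tensors="pt")
        feats.append(model.get_image_features(**inputs).cpu().numpy())
    return np.concatenate(feats, axis=0)

def frechet_distance(x, y):
    """Frechet distance between Gaussians fitted to two feature sets."""
    mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
    cov_x = np.cov(x, rowvar=False)
    cov_y = np.cov(y, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_x @ cov_y, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_x - mu_y
    return float(diff @ diff + np.trace(cov_x + cov_y - 2.0 * covmean))

# Usage (hypothetical paths):
# fid_clip = frechet_distance(clip_features(real_paths), clip_features(generated_paths))
```

Because the statistics are computed in CLIP's embedding space, the score depends on semantic content rather than ImageNet class evidence, which is what makes it more stable under preprocessing differences.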
These results also correlate with the image quality observed on other benchmark datasets, such as COCO Captions. We provide visualizations of generated outputs from the test set for all trained models in Figures 4, 5, 6, 7, 8 and 9.

Delineation between Content and Context features. The Lafite (Transfer Learning) model benefits from learned associations between visual concepts and text representations in the absence of highly descriptive captions, which corroborates its high CLIPScore. Similarly, for the models trained on our dataset without transfer learning, we observe that the LPIPS score and CLIPScore follow the same trajectory as FID_CLIP, with the Lafite (Base) model exhibiting the best correlation between ground-truth image similarity and relevance to the reference captions. These results show that the top-performing models do have an implicit understanding of what constitutes image content and context information.
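For reference, CLIPScore-style caption relevance can be approximated as a rescaled cosine similarity between CLIP image and text embeddings. The following is a simplified sketch under that assumption; the checkpoint and the 2.5 scaling factor (from the original CLIPScore formulation) are assumptions and not necessarily the exact settings used for the numbers reported here.

```python
# Simplified CLIPScore-style relevance: w * max(cos(E_image, E_text), 0).
# The checkpoint and w=2.5 are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_score(image_path: str, caption: str, w: float = 2.5) -> float:
    """Relevance of a caption to an image via CLIP embedding similarity."""
    image = Image.open(image_path).convert("RGB")
    image_inputs = processor(images=[image], return_tensors="pt")
    text_inputs = processor(text=[caption], return_tensors="pt",
                            padding=True, truncation=True)
    img_emb = model.get_image_features(**image_inputs)
    txt_emb = model.get_text_features(**text_inputs)
    cos = torch.nn.functional.cosine_similarity(img_emb, txt_emb).item()
    return w * max(cos, 0.0)

# Example (hypothetical file name):
# score = clip_score("generated.png", "Pollutants in the Gowanus Canal include pesticides ...")
```

A higher score indicates that the generated image and the reference caption are close in CLIP's joint embedding space, which is the sense in which "relevance" is used above.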
However, limitations remain in the implicit delineation of caption features, as shown in Fig. 9: although the reference image and the descriptive section of the caption concern an animal-tracking device, the text-to-image models incorrectly generate an animal as the image foreground rather than the tracker. Thus, comprehension of caption structure and explicit feature delineation must be improved. These experiments demonstrate the need for non-descriptive image-caption datasets such as ANNA to bridge the performance gap between descriptive and abstractive captions.

Figure 6. Result Visualization for Caption: Left, the New Museum and the original adjacent building it purchased 12 years ago on the Bowery, at right. Panels: (a)-(g) as in Figure 4.

Figure 7. Result Visualization for Caption: The rooms at the Ace Hotel have high ceilings and oversized windows. Some of the larger rooms and suites includes details like guitars, turntables and vinyl records. Panels: (a)-(g) as in Figure 4.

5. Discussion and Conclusion

Our experiments demonstrate how existing text-to-image architectures understand the abstractive captions present in domain-specific data such as news media. We show that implicit delineation between content and context features has limitations, prompting the need for explicit feature delineation and modified objective functions better suited to this task. One major impact of understanding abstractive captions such as those in ANNA is a reduced reliance on directly descriptive captioning. As dataset sizes keep increasing, scaling up human annotation of images to match demand adds a huge overhead. Because descriptive captions need to be tightly coupled with the reference image's contents, multiple rounds of evaluation and filtering are required, making annotation a tedious manual task. The use of abstractive captions for images can greatly simplify the human annotation process for datasets. Additionally, ANNA motivates the development of journalism-assistance solutions. The use of keywords and descriptive prompts with current image generators involves a lot of prompt engineering to obtain relevant images for a specific topic [10]. High-quality images are generated only when a particularly restrictive sentence structure and vocabulary are used in the prompts.
As models are trained to understand abstractive captions, the requirement for intensive prompt engineering would be significantly reduced. Similarly, achieving better delineation between the different feature types present in non-descriptive captions can also benefit related tasks such as image retrieval. The addition of context can play a major role in influencing the quality of retrievals.

Figure 8. Result Visualization for Caption: The Full Orange: two all-beef patties, special sauce, lettuce. Panels: (a)-(g) as in Figure 4.

Figure 9. Result Visualization for Caption: With the RoamEO base unit, left (which includes a collar), a dog owner can get radio signals tracking the animal's location, up to 1.5 miles away. Panels: (a)-(g) as in Figure 4.

Limitations. This paper aims to introduce the potential of abstractive captions and to motivate the development of more contextually grounded text-to-image synthesis models, particularly when synthesizing news-domain-specific images. Although news articles contain many named entities, we choose to filter them out and instead focus on context features that can be inferred from text captions and depicted by general visual concepts. Developing text-to-image synthesis architectures that can take advantage of named entities by using external knowledge bases as reference would help overcome this limitation. Large-scale human evaluation of images generated by text-to-image architectures on abstractive captions is another important step towards measuring their relative performance, which we aim to carry out as part of our future research.
Potential negative societal impacts. Image generation architectures have the potential to be misused for nefarious purposes such as spreading disinformation [31] and generating neural fake news [28]. Our current preprocessing pipeline removes most images containing named entities, i.e., public figures and locations of national importance, which contributes to risk mitigation. However, we recognize the threat posed by contextually relevant Deepfake images when dealing with news media images. Future research directions include understanding the extent to which text-to-image models can be used for neural fake news generation and identifying appropriate detection strategies.

6. Acknowledgements

This research has been partially supported by NSF Awards #1820609 and #2114824.

References

[1] Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12M: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3558–3568. IEEE, June 2021. 3

[2] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and C. Lawrence Zitnick. Microsoft COCO captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, Apr. 2015. 1, 2, 4
[3] Jiankang Deng, Jia Guo, Evangelos Ververas, Irene Kotsia, and Stefanos Zafeiriou. RetinaFace: Single-shot multi-level face localisation in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5203–5212. IEEE, 2020. 3

[4] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc., 2014. 2

[5] Herbert P. Grice. Logic and conversation. In Speech Acts, pages 41–58. Brill, 1975. 1

[6] Huaibo Huang, Zhihang Li, Ran He, Zhenan Sun, and Tieniu Tan. IntroVAE: Introspective variational autoencoders for photographic image synthesis. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18, pages 52–63, Red Hook, NY, USA, Dec. 2018. Curran Associates Inc. 2

[7] Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. In Proceedings of the 34th International Conference on Neural Information Processing Systems, number Article 1015 in NIPS'20, pages 12104–12114, Red Hook, NY, USA, Dec. 2020. Curran Associates Inc. 4

[8] Tuomas Kynkäänniemi, Tero Karras, Miika Aittala, Timo Aila, and Jaakko Lehtinen. The role of ImageNet classes in Fréchet inception distance. Mar. 2022. 5, 6

[9] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In Computer Vision – ECCV 2014, pages 740–755. Springer International Publishing, 2014. 2

[10] Vivian Liu and Lydia B. Chilton. Design guidelines for prompt engineering text-to-image generative models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, number Article 384 in CHI '22, pages 1–23, New York, NY, USA, Apr. 2022. Association for Computing Machinery. 7

[11] Allen Nie, Reuben Cohn-Gordon, and Christopher Potts. Pragmatic issue-sensitive image captioning. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1924–1938, Online, Nov. 2020. Association for Computational Linguistics. 2

[12] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pages 722–729. IEEE, Dec. 2008. 2

[13] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. arXiv:2103.00020 [cs], Feb. 2021. 2, 3

[14] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents. arXiv:2204.06125 [cs], Apr. 2022. 1, 2

[15] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. Feb. 2021. 2

[16] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In Proceedings of The 33rd International Conference on Machine Learning, pages 1060–1069. PMLR, June 2016. 2

[17] Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. LAION-400M: Open dataset of CLIP-filtered 400 million image-text pairs. Nov. 2021. 1

[18] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual Captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556–2565, Melbourne, Australia, July 2018. Association for Computational Linguistics. 2, 5

[19] Alasdair Tran, Alexander Mathews, and Lexing Xie. Transform and tell: Entity-aware news image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13035–13045. openaccess.thecvf.com, 2020. 3

[20] A. Tsirikoglou, G. Eilertsen, and J. Unger. A survey of image synthesis methods for visual machine learning. Comput. Graph. Forum, 39(6):426–451, Sept. 2020. 1

[21] Emiel van Miltenburg, Roser Morante, and Desmond Elliott. Pragmatic factors in image description: The case of negations. In Proceedings of the 5th Workshop on Vision and Language, pages 54–59, Berlin, Germany, Aug. 2016. Association for Computational Linguistics. 2

[22] Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, Devi Parikh, and Gal Chechik. Context-aware captions from context-agnostic supervision. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1070–1079. IEEE, July 2017. 2

[23] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 dataset, July 2011. 2

[24] Chien-Yao Wang, I-Hau Yeh, and Hong-Yuan Mark Liao. You only learn one representation: Unified network for multiple tasks. May 2021. 4

[25] Weihao Xia, Yujiu Yang, Jing-Hao Xue, and Baoyuan Wu. TediGAN: Text-guided diverse face image generation and manipulation. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2256–2265. IEEE, June 2021. 2

[26] Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1316–1324, Salt Lake City, UT, USA, 2018. IEEE. 2, 4

[27] Hui Ye, Xiulong Yang, Martin Takac, Rajshekhar Sunderraman, and Shihao Ji. Improving text-to-image synthesis using contrastive learning. The 32nd British Machine Vision Conference (BMVC), July 2021. 2, 4

[28] Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. Defending against neural fake news. Adv. Neural Inf. Process. Syst., 32, 2019. 8

[29] Han Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, and Yinfei Yang. Cross-modal contrastive learning for text-to-image generation. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 833–842. IEEE, June 2021. 2

[30] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N. Metaxas. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 5907–5915, 2017. 2

[31] Tao Zhang. Deepfake generation and detection, a survey. Multimed. Tools Appl., 81(5):6259–6276, Feb. 2022. 1, 8

[32] Yufan Zhou, Ruiyi Zhang, Changyou Chen, Chunyuan Li, Chris Tensmeyer, Tong Yu, Jiuxiang Gu, Jinhui Xu, and Tong Sun. Towards language-free training for text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17907–17917. openaccess.thecvf.com, 2022. 2, 4

[33] Minfeng Zhu, Pingbo Pan, Wei Chen, and Yi Yang. DM-GAN: Dynamic memory generative adversarial networks for text-to-image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5802–5810. openaccess.thecvf.com, 2019. 2