diff --git "a/CtE1T4oBgHgl3EQfWAR_/content/tmp_files/load_file.txt" "b/CtE1T4oBgHgl3EQfWAR_/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/CtE1T4oBgHgl3EQfWAR_/content/tmp_files/load_file.txt" @@ -0,0 +1,874 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf,len=873 +page_content='Cinematic Techniques in Narrative Visualization Matthew Conlen Our World in Data matt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content='conlen@ourworldindata.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content='org Jeffrey Heer University of Washington jheer@uw.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content='edu Hillary Mushkin California Institute of Technology hmushkin@caltech.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content='edu Scott Davidoff Jet Propulsion Laboratory California Institute of Technology scott.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content='davidoff@jpl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content='nasa.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content='gov ABSTRACT The many genres of narrative visualization (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content=' data comics, data videos) each offer a unique set of affordances and constraints.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content=' To better understand a genre that we call cinematic visualizations—3D visualizations that make highly deliberate use of a camera to convey a narrative—we gathered 50 examples and analyzed their traditional cinematic aspects to identify the benefits and limitations of the form.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content=' While the cinematic visualization approach can violate traditional rules of visualization, we find that through careful control of the camera, cinematic visualizations enable immersion in data-driven, anthropocentric environments, and can naturally incorporate in- situ narrators, concrete scales, and visual analogies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content=' Our analysis guides our design of a series of cinematic visualizations, created for NASA’s Earth Science Communications team.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content=' We present one as a case study to convey design guidelines covering cinematography, lighting, set design, and sound, and discuss challenges in creating cinematic visualizations.' 
1 INTRODUCTION
Within narrative visualization [57], researchers have identified genres (such as data comics [3] and data videos [1]) that help better unpack and situate their specific applications and the features that they employ. Cinematic visualizations embed data into a three-dimensional, time-varying scene, utilizing one or more cameras to direct the relationship between a viewer and the scene to tell a dramatic, data-driven story. This cinematic approach differs from the one typically used in information visualization, where graphics are reduced to a minimal form, incorporating only essential elements like axes and data-driven marks [64]. Cinematic visualizations are more maximal: non-data marks are not compressed or reduced; instead, entire digital worlds are built up around data points and included in the visible frame. This technique allows viewers to feel present in locations augmented with data-bound objects, known as data visceralizations [41]. Narrative documentary visualizations [10] can be produced through the careful editorial direction of the cinematography, editing, mise-en-scène, and sound [5].
Through an analysis of 50 existing cinematic visualizations, we identified four salient techniques (in-situ narrators, resolution of scale, anthropocentric perspective, and story-driven cameras) that cinematic visualizations employ to dramatically engage their audience through emotionally resonant data stories. We show how these techniques are used throughout the examples analyzed, discuss constraints associated with them, and reason about why cinematic visualizations may be effective despite the known pitfalls of 3D visualization. Using the lessons learned from this formal analysis, we produced a web-based article containing a series of cinematic visualizations relating to climate change, which was published by NASA's Earth Science Communications team.¹ We contribute the design process for one of these visualizations as a case study, presenting design artifacts that were created during our process (both successful and unsuccessful), and provide concrete guidelines for designers of cinematic visualizations.
Our analysis and design artifacts are available at https://cinematic-visualization.github.io/.

2 RELATED WORK
Narrative visualizations are used to improve memorability [7, 8], to instill empathy or emotion [9], to frame a message [33], and to improve engagement [19, 28]. Segel & Heer [57] provided an initial characterization of the design space of narrative visualizations, which was later elaborated to include additional techniques [60]. Hullman et al. [34] focused on the role of sequence in narrative visualization, characterizing a set of transition types and other high-level strategies for sequencing visualizations. Tools have been created to support narrative visualization authoring [2, 11, 18, 56], and a small number of empirical evaluations of narrative visualizations have been conducted [9, 19, 46, 69]. Further work has investigated specific genres of narrative visualization such as data comics [3], and new genres have emerged beyond Segel & Heer's initial set, such as "scrollytelling." Here we add to the ongoing conversation around narrative visualization by identifying another such genre: cinematic visualization. Kosara & Mackinlay [39] identified the opportunity for narrative visualization researchers to learn from other disciplines that engage heavily with storytelling and multimedia; this paper draws on film art scholarship, incorporating a formal system of cinematic style into our analysis, discussion, and design of cinematic visualizations.

2.1 Data Videos & VR
Data videos were included in the initial set of genres put forth by Segel & Heer [57] and first studied closely by Amini et al. [1].
Not all data videos are cinematic visualizations (for example, we do not consider a video consisting of a sequence of two-dimensional infographics to be cinematic), and not all cinematic visualizations are data videos (for example, one in which the visualization is deeply tied to the text of an interactive news article). While Amini et al. were primarily concerned with the narrative structure and attention cues of data videos, we additionally consider the visual and auditory style of cinematic visualizations in detail. Under our formal style system, our analysis of editing is most closely related to Amini's work; however, editing is only one of the four dimensions we consider.

Figure 1: The Dangers of Storm Surge (CV42) is a mixed reality video produced by the Weather Channel. The video opens with a close-up shot of a news anchor wearing a rain jacket, standing in front of a house (1A). There are audible sounds of rain under the anchor's voice and water dripping down the windows of the house. The camera pulls back, revealing that the live anchor is being composited into a 3D scene of a suburban neighborhood during a storm surge (1B). Very few data points are actually encoded as visual elements. The piece simply shows water rising from zero, to three, to six, to nine feet (1C-D) as the anchor narrates details about the danger of storm surge associated with hurricanes.

¹ https://climate.nasa.gov/news/2933/visualizing-the-quantities-of-climate-change/
Bradbury & Guadagno [10] studied viewer preferences in documentary narrative visualization (a subgenre of data videos in which data is presented using the techniques of documentary film), and found that audiences may prefer documentary data videos that include voice-over narration and on-screen narrators. We build on their analysis of the use of narrators and narration, in particular during our discussions of in-situ narrators that interact with data-bound objects digitally rendered into the space around them, and of the use of sound in cinematic visualizations. Video producers have extended the traditional documentary visualization format to enable interactivity, such as user-selected paths through the content and manipulable graphics [29, 63].
Immersive data stories [35] have been discussed within the emerging field of immersive analytics [44] and have been shown to allow viewers to examine data at multiple scales, support immersive exploration, and create affective personal experiences with data [36]. Lee et al. [41] introduced data visceralizations, where physical quantities are visualized in 3D virtual reality scenes. This paper helps to bridge the gap between data visceralization and narrative visualization by showing how cinematic techniques can be used to create author-guided narrative visualizations using data visceralizations. Cinematic visualizations similarly attempt to immerse viewers and create emotionally resonant experiences, although in contrast to immersive visualizations they are typically viewed on a standard 2D screen with limited (or no) user control of the camera. There are several toolkits for creating immersive data visualizations and data stories on augmented reality devices [21, 55, 59].

2.2 3D Computer Graphics
Animation [62] has been a partner discipline with visualization for some time.
Classic principles of animation [61] have been adapted for digital usage [40] and subsequently for information visualization [30]. With realistic camera models [38] and improving rendering capabilities [20], digital animation became a tool for creating Hollywood films [31]. While 3D graphics have been used in visualization with limited success, e.g., to display hierarchical information [17], the use of 3D graphics in information visualization is often avoided. A broad body of research documents potential pitfalls, including that volume is not a perceptually effective encoding channel [16], and that 3D projections introduce distortion and occlusion [67]. We find that designers of cinematic visualizations may intentionally use suboptimal encodings in support of more visceral [41] and emotionally resonant [28] graphics.
The use of 3D does find more regular application in scientific visualization [4, 65], including its use in storytelling [22, 43]. Borkiewicz used the term cinematic scientific visualization [6] to refer to a class of narrative data videos that focus on scientific data. Here we use cinematic visualization in a similar way, but do not restrict the data to be strictly scientific or inherently spatial. Unlike Borkiewicz, our description encapsulates visualizations which are not embedded in films but may be, for example, displayed as an animation accompanying a news article.

2.3 Film Art
Bordwell and Thompson [5] define narrative and style as the two major formal systems of film.
While prior work has examined sequence [34] and narrative structure & attention cues [1] in data videos, we observe that cinematic style has far less visibility in the critical vocabulary of data visualization. Style plays a crucial role in filmmaking, enabling directors to "confirm our expectations, or modify them, or cheat, or challenge them. [...] A director directs not only the cast and crew. A director also directs us, directs our attention, shapes our reaction." [5] This paper brings Bordwell and Thompson's formal system of cinematic style into the world of data visualization, and uses it to examine how narrative visualizations borrow techniques from cinema while departing from many of the traditional practices advocated by visualization research.
Four features together make up a film's style, each now briefly described. Mise-en-scène refers to everything that is seen in the frame, including lighting, actors, objects, backdrops, and so on [27]. Cinematography refers to the use of the camera, how shots are composed and framed [26].

Figure 2: (A) In [REALISTIC] Elephant rocket fuel - Saturn V (CV29), a model Saturn V rocket takes off; however, instead of flames exiting the bottom of the spacecraft, elephants are expelled, with the number of elephants representing the corresponding mass of fuel. The video may not make for a particularly effective visualization in terms of conveying precise quantities, but it successfully uses humor to call attention to the fact that rocket launches use a quantity of fuel so great that it is appropriate to measure it in terms of dozens of elephants. (B) In Here are 120 million Monopoly pieces, roughly one for every household in the United States (CV6) by the New York Times, the pile of Monopoly pieces is first seen from afar, before the reader scrolls down the page to trigger the camera zooming in to the very top of the pile, dramatically revealing what a disproportionately small portion of families provide most political funding.
By placing elements at specific locations within the frame, they can be perceived either as the subject or the background of the image [25]. Editing is the composition of multiple pieces of footage in time or space, creating transitions between perspectives and scenes [54]. Sound is the audio used, whether it be music, voice-over, or sounds from characters or objects on screen [32]. Our analysis of cinematic visualization identified techniques along these dimensions of style that designers can use to enhance their presentation of data narratives.

3 CINEMATIC VISUALIZATION SURVEY
We collected cinematic visualizations to analyze by surveying literature on narrative visualization [1, 6, 34, 35, 43, 57, 60], browsing the information visualization awards website Information is Beautiful [45] and the PacificVIS storytelling contest [50], and searching for news articles, blog posts, conference talks, and videos which were described using combinations of the keywords cinematic, data, data video, dataviz, datavis, visualization, news, newsgames, immersive, mixed reality, 3d, and video.
We also searched the portfolios of the creators of the visualizations found initially and of their collaborators. A full list of the cinematic visualizations can be seen in Figure 7 in the appendix of this paper; we refer to these examples by identifiers throughout the paper (e.g., CV4 refers to the fourth example in the table). Our analysis considered 50 cinematic visualizations. While the corpus is not exhaustive, the examples expose the variety of media (interactive news articles, YouTube videos, and TV segments) which cinematic visualizations occupy and the messages that they deliver. The examples visualized a broad range of data types, including datasets both with and without physical and geographic dimensions. Rather than empirically evaluating specific design patterns utilized in the visualizations, we turn to the means of understanding plot devices [57], sequencing [1], and film style [5].
We analyzed the style of each example along the dimensions of mise-en-scène, cinematography, editing, and sound using the four-step analysis process described by Bordwell and Thompson [5], a canonical method of film analysis. For each example we first identified the main communicative goals of the visualization, and then studied the salient techniques applied within the mise-en-scène, cinematography, editing, and sound which supported these narrative goals. We then used iterative coding to categorize the salient techniques used across the examples. Usage of these techniques is shown in Figure 7; for example, we recorded many ways in which a viewer's attention is guided (through color, light, annotations, and narrators in the mise-en-scène) and the use of cinematographic techniques like point-of-view perspective and user-controlled cameras. The table shows that the medium of the cinematic visualization has some impact on the techniques used; for example, cinematic visualizations embedded in online articles rarely use sound but often utilize user-paced segments, while those presented as videos make heavy use of sound.

[Figure 2, on-image text from panel B: "Here are 120 million Monopoly pieces, roughly one for every household in the United States. Just 158 families have provided nearly half of the early money for efforts to capture the White House."]

Figure 3: VFX Artist Reveals the True Scale of the Universe features a live-action narrator alongside scaled-down 3D models of celestial bodies.

3.1 Design Techniques
Through this analysis we identified salient recurring techniques that were frequently applied to support the communicative goals of the visualizations, including the use of in-situ narrators, anthropocentric perspective, resolution of scale, and story-driven cameras.

In-situ narrators mediate interactions with diegetic data.
Perhaps the most novel technique that we identified in cinematic visualizations is the use of in-situ narrators, in which the mise-en-scène contains a character that interacts directly with on-screen, diegetic data.² In contrast to traditional documentary visualization narrators, who might participate from off-screen ("voice of god") or refer to data visualizations rendered as two-dimensional holograms or composited over top of the video [10], in-situ narrators are understood by the viewer to be able to see and interact with the diegetic data, either through the use of superimposed data visceralizations (CV35, 40, 42, 43) or, in one case, data physicalization [37] (CV41). This (typically) mixed reality environment serves an important role for narrative visualization, allowing the on-screen narrator to mediate interactions between the audience and the graphics, letting them provide additional context and push the storyline forward. These narrators, essential components of the mise-en-scène, can also help concretize a visualization's anthropocentric perspective, reinforcing the idea that data is being displayed at a human scale.
In The Dangers of Storm Surge (CV42), one exemplar of this technique (Figure 1) produced by the Weather Channel, a news anchor wearing a blue jacket explains the dangers associated with flooding due to storm surge. The graphics are coordinated with the narrator's script and appear to respond to his dialogue, the composition of the frame inviting comparison between the man and the height of the water. The narrator is the primary subject from the start of the clip, positioned centrally in frame and maintaining focus due to visual cues like his bright blue coat, the circular platform upon which he stands, and the shot composition. To call attention to the water's height at certain key moments, a brightly colored annotation is projected onto the crest of the surge.

² Something which is diegetic exists in the same universe as the characters on screen; we use the phrase diegetic data to refer to data-driven elements which are part of, rather than composited over, the scene shown in the frame.
An anthropocentric perspective transports viewers and enables drama.
One notable aspect of cinema is how the camera is able to transport the audience into the scene: people watching suspend disbelief [24] to allow themselves to wholeheartedly imagine, or "believe", that they are in the scene, seeing things through the camera lens. That is, the camera's perspective becomes the viewer's point of view; they are one and the same. The height, angle, and distance of a camera in relation to objects in the scene all play a role in how a viewer will interpret and respond to the frame that they ultimately see [5]. When a camera is placed high above a setting, the viewer feels like they are also high above it. When a camera is placed at eye level, a viewer feels as if they are standing there watching the subject.
For example, both CV1 and CV26 utilize unit visualizations and concrete scales to visualize quantities in relation to the size of Manhattan, but each uses perspective to impact the viewer's experience in a different way. In CV1 the data being displayed (plastic bottle usage) is not directly related to the locations being used as concrete scale referents, and an overview shot is used, letting the viewer absorb the scale of the data rather than the details and textures of the city itself. In contrast, CV26 begins with a shot from a camera placed at eye level, looking at several of the city's ubiquitous yellow taxis, transporting viewers to the city at street level and forcing them to reckon with the data being displayed (New York City's annual greenhouse gas emissions) in a much more visceral way [41].
Some cinematic visualizations place the camera perspective somewhere that is humanly impossible. However, if the audience suspends disbelief, the camera can carry the viewer through these otherwise inaccessible spaces; for example, CV12 shows an animation of the Cassini spacecraft as it orbited and eventually crashed into Saturn.
Choice and Chance (CV11), which visualizes the events of the 2016 Pulse nightclub shooting in Orlando, positions a camera looking "through" the roof of the nightclub. Because the scene is shot using a digital model instead of a real location, the roof of the club can simply be removed and problems of occlusion go away. Changing perspectives can also shift the subject of the scene or add emotional content; for example, when the camera moves to reveal something that wasn't already in the frame, the audience experiences seeing it for the first time. In Choice and Chance the camera moves to different vantage points throughout the model as the story progresses. The camera remains in an overview shot for the majority of the article, but moves to ground level at the climax, elevating the intensity of the shot by placing the viewer into the perspective of a bystander.

Figure 4: New York City's greenhouse gas emissions as one-ton spheres of carbon dioxide gas, a cinematic visualization produced by Carbon Visuals and released online. The cinematic visualization uses a variety of different camera views, along with stark colors, to guide viewers through an explanation of the scale of the city's greenhouse gas emissions. The number of instances of the blue sphere is driven by the rate of emissions. As this number grows, the city buildings serve as a concrete scale.

Author-defined camera trajectories can be played, paused, and (lightly) modified by viewers.
The cinematic visualizations that we analyzed tended to use author-driven narrative structures [57], with most user interactions consisting of the user clicking or scrolling to trigger the visualization to continue to the next stage (e.g., CV2, 5-17, 21-22). Operationally, this requires animating the position and orientation of a digital camera model along a track specified by the author, and it has been used heavily by cinematic visualizations embedded in articles (16 out of 22).
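As a concrete illustration of that operation, the sketch below animates a camera along an author-specified spline and maps the reader's scroll progress onto it, so that scrolling plays, pauses, and reverses the move. This is a minimal sketch assuming a WebGL scene built with three.js; the paper does not prescribe tooling, and the track points and look-at target are hypothetical.

```ts
import * as THREE from 'three';

// Author-specified track: the camera's position is animated along a smooth
// spline while it stays aimed at a fixed point of interest in the scene.
const track = new THREE.CatmullRomCurve3([
  new THREE.Vector3(0, 2, 50),   // street-level establishing shot
  new THREE.Vector3(10, 30, 30), // rising crane move
  new THREE.Vector3(0, 80, 5),   // overhead view at the story's peak
]);
const target = new THREE.Vector3(0, 10, 0); // e.g., a data-driven pile

const camera = new THREE.PerspectiveCamera(45, 16 / 9, 0.1, 1000);

// Map scroll progress in [0, 1] to a parameter along the track, so the
// reader effectively scrubs the camera move forward and backward.
function updateCamera(scrollProgress: number): void {
  const t = Math.min(Math.max(scrollProgress, 0), 1);
  camera.position.copy(track.getPointAt(t)); // arc-length parameterized
  camera.lookAt(target);
}

window.addEventListener('scroll', () => {
  const max = document.body.scrollHeight - window.innerHeight;
  updateCamera(window.scrollY / max);
});
```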
The other way in which (constrained) interactivity was employed was to allow the manipulation of 3D models. In most cases this means the user can position the camera at a particular location around the model (see CV17 for a stereotypical example). These models might be scientific (CV13, 17) or cultural (CV5) objects that would be otherwise inaccessible to the audience viewing the visualization. It is common for orbital cameras to be used, constraining the camera's focus to remain on a particular object of interest while allowing the user to exercise control over viewing angle and zoom level (Fig. 7D). Cinematic visualizations that support these interactions must be rendered in real time, limiting the fidelity at which the models may be rendered.
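A minimal sketch of such a constrained orbital camera, using the stock OrbitControls helper that ships with three.js (the target and distance limits are illustrative, not taken from any surveyed example):

```ts
import * as THREE from 'three';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';

const renderer = new THREE.WebGLRenderer();
const camera = new THREE.PerspectiveCamera(45, 16 / 9, 0.1, 100);
camera.position.set(0, 1, 5);

// The orbital camera keeps its focus locked on the object of interest:
// the user may change viewing angle and zoom, but cannot wander off.
const controls = new OrbitControls(camera, renderer.domElement);
controls.target.set(0, 1, 0); // e.g., the center of a scanned artifact
controls.enablePan = false;   // keep the model as the subject
controls.minDistance = 2;     // zoom limits keep the model in frame
controls.maxDistance = 10;
controls.update();
```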
Visualization techniques are combined toward resolution of scale.
While we traditionally think of 3D graphics as ineffective for encoding quantities [16], a recurring theme in our examples is the use of 3D graphics to visualize and communicate quantities of a massive scale (e.g., CV1, 6, 8, 26-28). Quantities at a scale beyond what we experience in daily life (i.e., hyperobjects [47]), like the amount of carbon dioxide emitted from NYC annually (CV26), may be especially difficult for people to picture because we rarely, if ever, interact with quantities of such a size. Cinematic visualizations can convey a quantity of scale in a concrete and affecting way by using cinematography to establish the viewer's point of view from the ground, a position which often serves as the implicit zero point of a y-axis. We observed that several visualization techniques are naturally expressed in cinematic visualizations, including data visceralizations [41], unit visualizations [51], and concrete scales [14]. For example, in CV27 the viewer sees a city park, including trees and people standing in a grassy field, and a ten-meter-tall blue sphere representing the actual size of one metric ton of CO2 (data visceralization). As the scene progresses, many more spheres appear, each representing one metric ton of CO2 (unit visualization), until so many appear that the camera must zoom out, above the park, observing the growing pile of spheres in comparison to the city buildings (concrete scale).
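The sphere's ten-meter size follows directly from the physical quantity being visceralized. A quick check, assuming CO2 gas at a density of roughly 1.98 kg/m³ (near 0 °C and 1 atm; the surveyed pieces do not state the density they used):

```ts
// One metric ton of CO2 gas, rendered at its actual physical volume.
const massKg = 1000;
const densityKgPerM3 = 1.98; // CO2 near 0 °C and 1 atm (assumed)
const volumeM3 = massKg / densityKgPerM3; // ≈ 505 m³

// Solve (4/3)·π·r³ = V for the sphere's radius.
const radiusM = Math.cbrt((3 * volumeM3) / (4 * Math.PI)); // ≈ 4.9 m

console.log(`diameter ≈ ${(2 * radiusM).toFixed(1)} m`); // ≈ 9.9 m
```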
Objects which are used as backdrops, for example a city skyline (CV11) or a parked car (CV42, Fig. 1), may serve double duty as concrete scale referents and contextual elements. The use of 3D graphics affords designers the ability to use concrete scales (CV1, 26) and visual analogies (CV29, 36) to (re-)contextualize the size of objects; digital sets are constructed to facilitate comparisons that are impossible to make directly in the physical world (CV1, 27) and use point-of-view perspective to impart a visceral sense of magnitude.
The visual medium is rich with possibilities for analogy. For example, in [REALISTIC] Elephant rocket fuel - Saturn V (CV29, Fig. 2), designer Maxim Sachs renders the launch of the Saturn V rocket, except that the rocket expels elephants behind it as it travels, rather than exhaust. The elephants represent the mass of fuel that is being expended. By juxtaposing these images, Sachs is able to reframe an abstract quantity of rocket fuel in terms that people may have more familiarity with, and to do so with a sense of humor that may make the visualization more memorable or engaging for its audience [8]. In a more typical case, the narrator of CV40 asks the audience to imagine if Earth were the size of a tennis ball, and then, using this new scale, shows the relative size of different planets, moons, and stars. These planets are compared against one another, rendered into real-world footage including a narrator who provides guidance and relevant facts about the celestial objects. They are shown embedded into several settings, for example an office, a Los Angeles street, and the New York City skyline.

Figure 5: How Much is a Gigatonne? shows one gigatonne of ice in Central Park, New York. A digital set (A) is designed including multiple cameras, lighting, and data-driven and contextual elements. Footage from the various cameras is composed to create the final sequence (B-E). This was one of several videos that we developed for an article published on NASA's climate website. View the full videos at https://cinematic-visualization.github.io/.
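Figure 5's digital set carries several cameras at once; in a real-time engine, "editing" such footage can amount to choosing which camera renders each span of the timeline. The hard-cut sketch below is one way this might be wired up, again assuming three.js; the shot times and camera placements are illustrative only.

```ts
import * as THREE from 'three';

const scene = new THREE.Scene();
const renderer = new THREE.WebGLRenderer();

const godsEye = new THREE.PerspectiveCamera(45, 16 / 9, 0.1, 2000);
godsEye.position.set(0, 800, 0); // high above the set
godsEye.lookAt(0, 0, 0);

const pointOfView = new THREE.PerspectiveCamera(60, 16 / 9, 0.1, 2000);
pointOfView.position.set(30, 1.7, 60); // eye level, human height (assumed)
pointOfView.lookAt(0, 20, 0);

// Hypothetical shot list: start time (seconds) and the camera to cut to.
const shots = [
  { start: 0, camera: godsEye },     // establishing shot
  { start: 4, camera: pointOfView }, // cut to ground level
];

function render(timeSec: number): void {
  // Pick the last shot whose start time has passed: a hard cut.
  const active = shots.filter(s => s.start <= timeSec).pop() ?? shots[0];
  renderer.render(scene, active.camera);
}
```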
3.2 Constraints
The time-based format does not support a high data density.
Traditional information graphics often present a data-dense display with minimal "non-data ink" [64] to remove possible distractions and optimize the display for tasks such as value look-up and comparison. In some cases, designers may choose to add additional illustrative features to increase the memorability of the visualization [7]. In contrast, cinematic visualizations utilize diegetic data, embedded in a three-dimensional scene alongside other elements which contextualize it (see CV35 for a striking example). In cinematic visualizations (e.g., CV40, 42) the elements surrounding the data fulfill a dual role as both data and non-data ink: they add spatial presence to the visualization [12], supporting a sense of transportation to the virtual world for viewers, while simultaneously serving as guides and axes, points of reference for concrete scales [14]. Rather than densely packing data, cinematic visualizations often show only one or a few data points in the frame, favoring additional contextual elements that help add emotional resonance to the data story being told.

Designers trade off between perceptual effectiveness and dramatic narrative.
Visualizations that employ 3D graphics are often ineffective perceptually. These graphics may use suboptimal encoding channels like volume and can further bias judgement through distortion and occlusion. Cinematic visualizations are not appropriate when the task is centered around value judgements. Instead, we see cinematic visualizations used effectively when a rough estimate of values is sufficient and the precise value is not of central importance (e.g., CV29). Many of the cinematic visualizations that we analyzed use a volume encoding to display data (CV1, 6, 26, 27, 35). Volume is a less effective encoding channel compared to position and may cause the audience to misestimate the true quantity. This trade-off may be acceptable depending on the data being presented and the precision with which the author hopes it will be apprehended.
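When sphere size encodes an abstract value rather than a literal physical volume (as in the gas spheres above), one common mitigation is to scale the radius by the cube root of the value, so that rendered volume, not radius, stays proportional to the data; a linear radius mapping would exaggerate ratios cubically. A small sketch with hypothetical values:

```ts
// Scale radii so that rendered *volume* is proportional to the data value;
// scaling the radius linearly would exaggerate differences cubically.
function radiusForValue(value: number, baseValue: number, baseRadius: number): number {
  return baseRadius * Math.cbrt(value / baseValue);
}

// If 1 unit is a sphere of radius 1 m, then 8 units should be radius 2 m
// (8x the volume), not radius 8 m (512x the volume).
console.log(radiusForValue(8, 1, 1)); // 2
```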
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content=' CV29).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content=' Many of the cinematic visual- izations that we analyzed use a volume encoding to display data (CV1,6,26,27,35).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content=' Volume is a less effective encoding channel com- pared to position and may cause the audience to misestimate the true quantity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content=' This trade-off may be acceptable depending on the data being presented and the precision with which the author hopes it will be apprehended.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content=' 4 CASE STUDY: HOW MUCH IS A GIGATONNE?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content=' We collected and studied the aforementioned cinematic visualiza- tions while exploring designs to support the communication ob- jectives of NASA’s Earth Science Communications Team.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content=' Climate change is a complex, multi-faceted issue of global importance [49] and the team is tasked with maintaining climate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content='nasa.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content='gov, a website that tracks vital statistics about Earth’s climate, and delivers up- dates about global warming to a diverse global audience of millions of readers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content=' The team uses traditional information graphics [48], as well as narrative visualizations (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfWAR_/content/2301.03109v1.pdf'} +page_content=', [53]), to highlight how scien- tists know that anthropogenic global warming is truly happening, what changes have taken place in Earth’s climate so far, and why it is an important topic for readers to understand even if it does not seem to be affecting them.' 
However, the team sought data-driven stories that more viscerally engaged their audience and connected the planetary-scale data of climate change to a human scale that readers can readily understand.

Figure 6: We explored many designs that were ultimately left on the cutting room floor. These designs were dropped for reasons including poor perceptual effectiveness (A-C), locations too small for the scale of the data (D-F), and designs too illustrative and not physically accurate enough (G-H). It was particularly difficult to identify locations that were broadly recognizable from a 3D reconstruction but also suitable to serve as a concrete scale referent.

Within the domain of climate change communication is a range of research investigating how to effectively communicate the latest science to a broad audience. High-level principles of climate change communication have been synthesized by the Center for Research on Environmental Decisions [58]. We think cinematic visualizations are well suited to satisfy the principles "Get Your Audience's Attention" and "Translate Scientific Data Into Concrete Experience." Here we describe how our work creates connections between ongoing investigations in narrative visualization, computer graphics, and film art to achieve this.

Guided by editorial priorities set by NASA's Earth Science Communications team, we produced an article consisting of several cinematic visualizations to communicate massive quantities related to climate change. We endeavoured to make them interpretable and meaningful to a broad public audience. These visualizations were eventually published to an audience of millions.
Here we describe our design process for creating cinematic visualizations, identifying a general workflow of use to practitioners who wish to create this type of visualization themselves, and to tool-builders who wish to provide better support for authoring cinematic visualizations in the future. As with visualization production in general, these steps are not necessarily linear; rather, the process is iterative and error-prone, and may require returning to earlier steps if it becomes apparent that a design is not working. We experienced many failed attempts (see Figure 6) before arriving at our final designs.

4.1 Pre-Production

Narrative. Quantities of ice loss are measured in gigatonnes, a unit of mass corresponding to one billion metric tons. Statistics about ice loss are often reported using this unit; for example, Earth's polar ice caps are losing about 426 gigatonnes of ice per year at the time of writing. The scale of the unit hides the fact that 426 gigatonnes is a massive amount of ice. Our goal was to provide a visualization that would allow our audience to better interpret these statistics going forward. We collected statistics on ice loss in Greenland and Antarctica (the two ice sheets) over significant periods, such as the amount of ice lost between 2002 and 2017, when NASA's GRACE satellite was actively observing the polar ice caps, or since the start of the 20th century (5,000 and 49,000 gigatonnes, respectively). We settled on cinematic visualization because it is a natural fit for the use of concrete scales, we wanted to draw people's attention, we are showing a relatively small amount of data, and we wanted to display the data in a context that conveyed corporeal urgency.
Given the affordances identified in Section 3, a cinematic visualization was an appropriate choice for our task of visualizing quantities related to climate change in a way that would capture the attention of our audience and allow them to comprehend the data in a concrete way. We ultimately chose as our form factor an interactive article containing a series of short cinematic visualizations. The visualizations were embedded as pre-rendered videos, which could be loaded dynamically, allowing for a certain amount of interactivity. Depending on the use case, one must determine whether real-time rendering is needed. Real-time rendering limits the level of photorealism [52], but enables another level of interactivity, letting the user control the camera and interact with elements in the scene (Fig. 7D). We intended the narrative structure of our visualization to be largely author-driven [57], and decided that real-time rendering was not required.

After determining that a cinematic visualization was appropriate, we began outlining possible scripts and creating storyboards in which we sketched ideas for locations, cinematography, and sequencing of shots. We first sought to identify locations that would serve as effective backdrops, allowing people to gain a concrete understanding of the size of the data in familiar places. We considered natural locations like the Grand Canyon, Monument Valley, Mt. Everest, and Uluru; urban environments like Houston, New York City, San Francisco, and St. Louis; and other man-made sites like football stadiums and the Hoover Dam.
Within each of these environments we created sketches to help determine the camera placement, mise-en-scène, data, and annotations that the visualizations would require, and wrote rough scripts to define the narrative structure. While we wanted to place data in a variety of environments so that our diverse audience would be able to connect, ultimately many of these locations were not used. See Figure 6 for examples of locations that were not able to support both focus and context at an anthropocentric perspective. The final article consisted of videos visualizing one, then 5,000, then 49,000 gigatonnes of ice. The videos were embedded throughout the text of an article which provided context. In the first and last videos the user could click to play videos displaying the relevant quantity of ice in different locations. Here we look closely at the design process for the first video, showing one gigatonne of ice in Central Park, New York City.

4.2 Principal Photography

With the storyboards and scripts ready, the source footage that would make up the final video needed to be created. We chose Blender for this process, which provides both an interactive GUI as well as a Python API that allowed us to load, transform, and bind data to objects in a 3D scene. We created renders for many different scenes, although we ultimately used only a small number of them in our published pieces.
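To make the data binding concrete, here is a minimal sketch (our illustration, not the production script) of how a mass reported in gigatonnes can be turned into the dimensions of a cubic Blender object via the Python API. The object name, the assumption of a cubic block, and the use of a standard glacial-ice density of roughly 917 kg/m³ are all ours.

```python
import bpy

KG_PER_GIGATONNE = 1.0e12   # 1 Gt = 10^9 metric tons = 10^12 kg
ICE_DENSITY_KG_M3 = 917.0   # approximate density of glacial ice

def add_ice_block(mass_gt, name="IceBlock"):
    """Add a cube whose volume corresponds to `mass_gt` gigatonnes of ice."""
    volume_m3 = mass_gt * KG_PER_GIGATONNE / ICE_DENSITY_KG_M3
    side_m = volume_m3 ** (1.0 / 3.0)  # side length of the equivalent cube
    # Blender scene units default to meters; offset z so the cube rests on the ground.
    bpy.ops.mesh.primitive_cube_add(size=1.0, location=(0.0, 0.0, side_m / 2.0))
    block = bpy.context.active_object
    block.name = name
    block.scale = (side_m, side_m, side_m)
    return block

add_ice_block(1.0)  # one gigatonne: a cube roughly 1,030 m on a side
```

Scripting the binding in this way, rather than sizing objects by hand, makes it cheap to regenerate the scene whenever the quantity being visualized changes.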
Mise-en-scène. The elements that constitute the mise-en-scène of a cinematic visualization need to be created and arranged. Because many of our scenes take place in real-world locations, we were able to utilize existing open data sets to import geographic data, including 3D models of buildings and terrain data. In addition to elements derived from real-world locations, we added elements which would be parameterized by data, for example the large block of ice placed in Central Park (Figure 5). After the models have been created, they need to be assigned a material, which (along with lighting) determines how they appear in final renders. We chose a flat shading for the buildings and other environmental elements. This gave these elements less visual weight while still allowing them to be easily identifiable. We considered using a similar flat style for the data elements, but ultimately decided to add a more photorealistic ice material, which would allow the data to stand out against the buildings and reinforce the idea that we were showing a concrete amount of ice. While many of the examples that we saw utilize a studio lighting setup to control shadows and reflection, we opted to use simple global illumination to emulate the sun shining in our outdoor scene. This meant our lighting was realistic for the location and the setup was quite simple, but we were limited in our ability to use lighting as a tool to guide attention, as we saw it used (for example) in CV15.

With the scene constructed, the next step was to bind the data. This was the point at which we realized that many of the set locations were not going to work with the data we were hoping to visualize ("data changes everything" [66]). For example, a gigatonne of ice placed in a football stadium (Fig. 6D) would extend over 200 kilometers into the sky, making it difficult to view both the diegetic data and the stadium itself simultaneously.
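The stadium figure can be checked with back-of-the-envelope arithmetic (ours, assuming a roughly 110 m × 49 m playing field and the same ice density as above):

```python
volume_m3 = 1.0e12 / 917.0     # one gigatonne of ice, in cubic meters
footprint_m2 = 110.0 * 49.0    # approximate football field footprint
height_m = volume_m3 / footprint_m2
print(f"{height_m / 1000:.0f} km")  # -> roughly 200 km
```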
For our visualizations we simply assigned the dimensions of a primitive 3D object based on calculations involving the mass of ice melted over specific periods, along with the density of ice, in order to create blocks of ice that were physically representative of the quantity lost.

Cinematography. After we incorporated our data into the scene, it was time to add animation and cinematography. Blender supports a keyframe-based animation system, which made it simple to add basic animations to the size and location of elements in the scene, as well as to the position and perspective of cameras. Working from the storyboards that we had created, we placed cameras (shown in Figure 5) that would be physically realistic and familiar: we use three cameras, one a human point-of-view, one a bird's eye view (as if taken from a helicopter circling the city), and one a "god's eye view" taken from the perspective of a satellite overhead. The satellite camera allowed us to create an initial establishing shot, while the other cameras supported a ground-level view as well as an overview. When sequenced together, these camera perspectives allow us to present focus plus context [13] to the viewer and support our narrative goals [1].
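The keyframing itself reduces to a handful of API calls. The following sketch (the camera name, coordinates, and timing are illustrative placeholders, not the values we used) animates a descent from a satellite-like vantage point toward street level:

```python
import bpy

cam = bpy.data.objects["Camera"]  # assumes the scene contains a camera by this name

# Establishing shot: start far overhead, then descend toward street level.
cam.location = (0.0, 0.0, 5000.0)
cam.keyframe_insert(data_path="location", frame=1)

cam.location = (800.0, -800.0, 150.0)
cam.keyframe_insert(data_path="location", frame=120)  # 120 frames ≈ 5 s at 24 fps
```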
4.3 Post-Production

Once the source material was created, we needed to edit it to form a coherent narrative, for example by combining multiple videos in sequence, adding annotations on top of the video to provide context, and adding sound to add presence, guide attention, and provide details. Any visual effects must be added at this stage. For example, in the case of embedding digital data objects into physical footage of a narrator, a "match moving" process to align the digital and physical scenes would need to be performed [23].

Editing. We combined footage from multiple cameras, composing shots into a narrative structure, starting with establishing shots, then initial action, peak, and finally release [1]. The sequencing of images is important for advancing the narrative, pacing, and mood. Narrative visualizations often include annotations to provide additional context and explain to viewers what it is they are seeing. In cinematic visualizations these annotations can be composited over the source footage using standard video editing software. Some examples that we saw embed annotations directly into the 3D scene itself, which requires them to be rendered into the source footage. We chose to composite annotations rather than include them in situ, as this facilitated more rapid iteration during the editing process, allowing us to change the timing, location, and content of annotations without needing to re-render any of the source footage, a potentially time-consuming process.

Sound. In our work we ultimately did not use audio, instead opting to embed the videos in a larger text article, which would provide viewers with context for the visualization. This is a limitation and something to explore in future work, as audio can be a useful tool in cinematic visualization for setting tone and driving narrative.

4.4 Publication

Once the article was completed and approved for publication, it was posted to NASA's climate website. We did not collect detailed metrics on how readers interacted with the videos in the article itself, but we can see how users responded to posts on the NASA Climate Facebook, Instagram, and Twitter pages.
These posts, which contained a link to the article and (in some cases) directly embedded the video set in New York City, were collectively viewed tens of thousands of times and received thousands of engagements (likes, comments, shares), and the article was subsequently shared by other organizations, such as the United States Department of Agriculture and the World Meteorological Organization, as well as by individual scientists and meteorologists. Across all of the social platforms users left 94 direct comments, with topics ranging from positive (for example, some explicitly expressing that they like this type of visualization: "We need more of these types of comparisons in the media", "This is an amazing visualization. Thanks NASA!", or asking for similar visualizations of different quantities: "It would be very interesting to see this illustration but with the predicted sea level after all the ice in Greenland and Antarctica melt. Can you show that?") to concern about the data being visualized ("Oh my God. Come to our aid.", "Thanks for helping us comprehend the enormity of this sad news!", a GIF of a cartoon rodent crying) to climate change denial ("Where's your proof", "Wow, as much as 2 millimetres. Measured by satellite too"). The comments were distributed roughly uniformly across the three types (positive attitude toward the visualization, concern about climate change, and climate change denial), but varied heavily across platforms, with users on Facebook expressing concern or denying that there is a climate problem, users on Instagram leaving both positive and concerned comments, and users on Twitter expressing a range of concern, denial, and positive attitudes toward the graphic.

5 DISCUSSION

Cinematic visualizations can engage viewers with dramatic and visceral presentations of data, highlighting particularly important data points and presenting an author-guided tour through data embedded in a relevant context. On the other hand, they may be poor choices for communicating large amounts of data and are not optimal in terms of perceptual effectiveness. If a cinematic visualization is appropriate, it will require a broad range of skills, such as cinematography, narrative, 3D modeling, video editing, and possibly acting, and a time-consuming iterative design process.

5.1 Challenges of Creating Cinematic Visualizations

While cinematic visualizations can capture the attention of their audience and help viewers relate to the data in a concrete way, they can be challenging and time-consuming to produce. Here we discuss some of the challenges inherent in creating an effective cinematic visualization. One of the most apparent difficulties of cinematic visualization is the potentially overwhelming size of the design space. Works in this genre typically use three visual dimensions, plus time and sound. The methods that allow us to analyze and critique cinematic visualizations (e.g., [5]) do not necessarily help us to create them; that is, they are difficult to use generatively. While information designers are familiar with the attention to detail required when placing objects in a frame to achieve an effective visual hierarchy, in cinematic visualizations there are also objects outside of the frame that affect the style and tone of the visualization. For example, the placement of the camera in relation to the subjects, the focal length of the camera, and the placement and strength of light sources are all instrumental in creating a shot that can easily be decoded by viewers.

There is a diversity of tasks that need to be completed in order to create a cinematic visualization, each requiring a separate set of skills. For example, in addition to the skills required for traditional visualization (data analysis, transformation, and visualization) and narrative visualization (understanding audience, storytelling, graphic design), cinematic visualization will often make use of animation, cinematography, lighting, motion graphics, 3D modeling, sound design, video editing, and (sometimes) acting. The skills that make one a good 3D modeler are not necessarily those that make one a good storyteller, and so graphics of this type often require a diverse team to create. Furthermore, for ray-tracing renderers, there is a large gap between prototypes and final rendered output, challenging the iterative design process.

5.2 Considerations for Cinematic Visualization Creators

While cinematic visualizations share many of the design goals of more traditional narrative visualization (e.g., guiding the viewer's attention), the way in which these goals are operationalized differs. Here we highlight ways that these design goals were operationalized across the four dimensions of style, both in our own work and in the examples analyzed. For a full breakdown of the techniques used in each example, see Figure 7.

Mise-en-scène. Objects' sizes, colors, shapes, textures, and placement in relation to one another can all be used to create an effective visual hierarchy. For example, to guide a user's attention in a cinematic visualization, a designer might choose to use lighting to cast a glow around an object (CV11), or change the object's color (CV2, CV13) so that it stands out. In How Much is a Gigatonne?, the ice's large size, color, and shine draw a viewer's attention to it in contrast with the surrounding buildings, which are smaller, grayscale, and matte. The mise-en-scène is designed both to communicate information, including using narrators (CV42), diegetic data (CV35), and visual analogies (CV6), and to add dramatic affect (e.g., CV11, CV40).
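As one illustration, the kind of attention-guiding glow seen in CV11 can be approximated in Blender with an emissive shader. This is a generic sketch of the technique, not code taken from any of the surveyed pieces:

```python
import bpy

def glow_material(name, rgba, strength=5.0):
    """Build a simple emissive material that makes a focal object stand out."""
    mat = bpy.data.materials.new(name)
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    nodes.clear()
    emit = nodes.new("ShaderNodeEmission")
    emit.inputs["Color"].default_value = rgba
    emit.inputs["Strength"].default_value = strength
    out = nodes.new("ShaderNodeOutputMaterial")
    links.new(emit.outputs["Emission"], out.inputs["Surface"])
    return mat

# e.g., make a (hypothetical) focal object glow pale blue:
# bpy.data.objects["IceBlock"].data.materials.append(
#     glow_material("Glow", (0.6, 0.9, 1.0, 1.0)))
```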
Cinematography. Perspective can be used both to drive narrative and to set tone, as well as to provide focus plus context. The position (CV26), angle (CV28), or focus (CV2) of a camera can be modified so that an object becomes the focal point of the frame. To help narrow the large space of possible cinematic visualizations and make effective use of the frame, designers of cinematic visualizations may study how shots are composed and sequenced in films. In How Much is a Gigatonne?, we rendered footage from multiple cameras in order to create close-up, medium, and wide shots. Some cinematic visualizations enable limited user control of the camera, for example letting the user trigger the next stage of animation (CV9) or rotate their perspective (CV13). Often the camera needs to track a particular object in the scene (CV12); if the object is in motion, the camera may need to follow it. Planning the path of the camera so that the object of interest is not occluded by other objects, and so that motion is smooth and visually pleasing, can be difficult. This may be done algorithmically [15, 68] or by hand.
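In Blender, basic object tracking is available as a constraint, which keeps a moving subject framed without hand-animating the camera's rotation. A minimal sketch, with placeholder object names:

```python
import bpy

cam = bpy.data.objects["Camera"]
subject = bpy.data.objects["IceBlock"]   # placeholder name for the tracked object

track = cam.constraints.new(type='TRACK_TO')
track.target = subject
track.track_axis = 'TRACK_NEGATIVE_Z'    # a camera looks down its local -Z axis
track.up_axis = 'UP_Y'
```

A constraint like this handles framing automatically, but the camera's position still has to be planned so that the subject stays unoccluded.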
Editing. Putting the footage into a particular order progressively reveals information to convey the authors' intended message. Editors may use footage from one camera at one location (CV29), or multiple cameras at multiple locations (CV40). The editing techniques used in data videos, particularly the use of establishing, initial, peak, and release shots, have been studied in more depth by Amini et al. [1]. Like movie makers, creators of cinematic visualizations may use storyboarding to prototype and communicate their scenes in a low-fidelity form before embarking on the time-intensive work of 3D modeling and rendering. In How Much is a Gigatonne? we use establishing shots to situate the viewer before initiating action from the perspective of the ground level (an anthropocentric perspective), then cut to the vantage point of a helicopter, using the city skyline as a concrete scale.

Sound. Audio can set tone (CV25), cue attention (CV28), and impart additional details through narration on-screen (CV40) or off-screen (CV45).
Music (CV29) and ambient sound (CV26) can affect the tone of the visualization and add presence to the scene. For example, CV29 combines techno music with a visual analogy for the weight of rocket fuel (measured in elephants) to create a humorous juxtaposition that may make the visualization more approachable and less dry, while CV26 uses diegetic sound (taxi cabs honking) to reinforce the anthropocentric perspective. In How Much is a Gigatonne? we did not use sound (neither did most of the other visualizations we analyzed that used an "article" format), but effective use of both the visual and auditory channels has been shown to lead to improved outcomes in multimedia learning contexts [42].

5.3 Implications for Authoring Tools

As cinematic visualization is a newly emerging genre, there is relatively little tool support to facilitate authoring this type of visualization. Instead, creators turn to general-purpose 3D software that was designed to support a breadth of use cases such as architectural design, modeling, and narrative animation. These tools, while powerful and expressive, may overwhelm users with complexity that is incidental to the task of creating a cinematic visualization. For example, objects are assigned materials which are powered by low-level shader code. One cannot choose between, say, "realistic" or "cartoon" aesthetics, but instead must compose low-level shader components to achieve the desired look. These tools also do not support the basic building blocks of visualization, such as easily ingesting data and binding data values to objects in a scene; instead, users must write custom scripts to handle any such task. The interfaces are in general multi-modal: most 3D modeling work is done directly through a GUI, but data-driven work needs to be done in code, and shaders are described using a directed graph. Authors are forced to context-switch between drastically different environments, arguably making it harder to iterate.

The task of 3D rendering can be computationally intensive. Depending on the output resolution, the complexity of the scene, and the computing power available, a short (30 second) animation could take several hours to render. There is a large gap between the fidelity of the final renders and what a designer sees while constructing the scene. This makes it important to create test renders frequently, but hard to have a rapid feedback loop.
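One common way to shorten this feedback loop, shown here as a generic sketch assuming the Cycles renderer rather than a description of our exact settings, is to scale down the resolution and sample count for test renders:

```python
import bpy

scene = bpy.context.scene
scene.render.resolution_percentage = 25  # render tests at quarter resolution
scene.cycles.samples = 32                # fewer path-tracing samples: noisier but faster
scene.render.filepath = "/tmp/test_render.png"
bpy.ops.render.render(write_still=True)  # write a single still frame to disk
```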
5.4 Limitations of our Work

Our survey was limited to 50 examples, taken from a limited set of sources. While not exhaustive, the examples implement a range of design techniques across a variety of applications. We do not provide an empirical evaluation of the work surveyed, instead choosing to use techniques of film criticism to analyze the patterns used and identify the communication intentions of their producers. We similarly did not empirically evaluate our own work, and instead provide an account of our design process and detail our reasoning for important decisions made along the way. Our work does not fully utilize the design space of cinematic visualizations that we identified; for example, we did not use sound at all, and all narration was done through written text with a few small overlays in the video. The experience might be improved by incorporating narration either on-screen or off [10].

6 CONCLUSION

We presented cinematic visualization, a genre of narrative visualization that uses techniques from cinema to enhance the presentation of data-driven stories. A central contribution of this work is to identify a new genre of narrative visualization that we then analyze in depth. The importance of genre is clear in other art forms like literature and cinema; however, it is invoked less often in the context of visualization research. We believe that this type of work is crucial for understanding the design of narrative visualizations and for thinking rigorously about how they can be constructed and deployed. While past work on narrative visualization looked specifically at narrative structure, here we look at both narrative and style as formal systems that contribute to the dramatic experience of watching a cinematic visualization. To do this, we turned to theory from another art form, film, to provide grounding in the features of style, and used analysis techniques established in that domain to deconstruct our case studies.

We analyzed a variety of examples of cinematic visualization and the techniques that they employ toward certain narrative applications. Many of these visualizations show a relatively small amount of data (e.g., focusing on a single rate or quantity) as opposed to being data-dense. The non-data elements of the scene play an important role: they are used to establish the location in which the shot takes place and provide cues to viewers about where they are, what they are looking at, and why it is relevant.
This approach is quite different from typical information visualizations, where data may be reduced to a minimal form, such as a line or a bar. Cinematic visualization instead tends to be more maximal in its approach: non-data ink is not reduced or omitted, but rather used to build up entire digital worlds around data points. This style encourages viewers to feel present in locations augmented with data objects, or to viscerally experience events that happened in the past or are happening far away in the universe.

Rendering data in 3D is a fraught endeavor, as the values being rendered can be obscured by humans' relatively poor ability to estimate and compare volume, and because the 3D projection can introduce distortion when trying to read values. Why would creators choose to follow a cinematic path over one that communicates the underlying data more clearly, directly, and precisely? We argue that in choosing to treat a visualization as a cinematic experience, its authors might be looking beyond the immediate data in order to viscerally ground that data in meaningful context. In other words, analytic precision is only one of several objectives that a visualization might help accomplish. In choosing 3D, we may diminish precision in service of other objectives.
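As a concrete illustration of the projection distortion noted above, consider an idealized pinhole camera with focal length f (a standard textbook model, used here purely for illustration): an object of height H at depth z projects to an image height

```latex
h = \frac{f\,H}{z},
\qquad
\frac{h_1}{h_2} = \frac{z_2}{z_1} \quad \text{when } H_1 = H_2 .
```

Two marks encoding identical values thus appear in inverse proportion to their distance from the camera; a mark twice as far away reads as half the size, and the viewer must mentally undo this foreshortening to compare values.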
ACKNOWLEDGEMENTS
We would like to thank Susan Callery, Holly Shaftel, Randal Jackson, Daniel Bailey, Michael Gunson, Josh Willis, Joe Witte, and the Earth Science Communications Team at NASA's Jet Propulsion Laboratory for their support of this work. A portion of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).

REFERENCES
[1] Fereshteh Amini, Nathalie Henry Riche, Bongshin Lee, Christophe Hurter, and Pourang Irani. 2015. Understanding data videos: Looking at narrative visualization through the cinematography lens. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 1459–1468.
[2] Fereshteh Amini, Nathalie Henry Riche, Bongshin Lee, Andres Monroy-Hernandez, and Pourang Irani. 2016. Authoring data-driven videos with DataClips. IEEE Transactions on Visualization and Computer Graphics 23, 1 (2016), 501–510.
[3] Benjamin Bach, Zezhong Wang, Matteo Farinella, Dave Murray-Rust, and Nathalie Henry Riche. 2018. Design patterns for data comics. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 38.
[4] M Pauline Baker and Colleen Bushell. 1995. After the storm: Considerations for information visualization. IEEE Computer Graphics and Applications 15, 3 (1995), 12–15.
[5] David Bordwell, Kristin Thompson, and Jeff Smith. 1997. Film Art: An Introduction. Vol. 7. McGraw-Hill, New York.
[6] Kalina Borkiewicz, AJ Christensen, Helen-Nicole Kostis, Greg Shirah, and Ryan Wyatt. 2019. Cinematic Scientific Visualization: The Art of Communicating Science. In ACM SIGGRAPH 2019 Courses (Los Angeles, California) (SIGGRAPH '19). ACM, New York, NY, USA, Article 5, 273 pages. https://doi.org/10.1145/3305366.3328056
[7] Michelle A Borkin, Zoya Bylinskii, Nam Wook Kim, Constance May Bainbridge, Chelsea S Yeh, Daniel Borkin, Hanspeter Pfister, and Aude Oliva. 2015. Beyond memorability: Visualization recognition and recall. IEEE Transactions on Visualization and Computer Graphics 22, 1 (2015), 519–528.
[8] Michelle A Borkin, Azalea A Vo, Zoya Bylinskii, Phillip Isola, Shashank Sunkavalli, Aude Oliva, and Hanspeter Pfister. 2013. What makes a visualization memorable? IEEE Transactions on Visualization and Computer Graphics 19, 12 (2013), 2306–2315.
[9] Jeremy Boy, Anshul Vikram Pandey, John Emerson, Margaret Satterthwaite, Oded Nov, and Enrico Bertini. 2017. Showing people behind data: Does anthropomorphizing visualizations elicit more empathy for human rights data? In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 5462–5474.
[10] Judd D Bradbury and Rosanna E Guadagno. 2020. Documentary narrative visualization: Features and modes of documentary film in narrative visualization. Information Visualization 19, 4 (2020), 339–352.
[11] Matthew Brehmer, Bongshin Lee, Nathalie Henry Riche, Darren Edge, Christopher White, Kate Lytvynets, and David Tittsworth. 2017. Microsoft Timeline Storyteller. https://timelinestoryteller.com.
[12] Paul Cairns, Anna Cox, and A Imran Nordin. 2014. Immersion in digital games: review of gaming experience research. Handbook of Digital Games 1 (2014), 767.
[13] S Card, J Mackinlay, and B Shneiderman. 1999. Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann.
[14] Fanny Chevalier, Romain Vuillemot, and Guia Gali. 2013. Using concrete scales: A practical framework for effective visual depiction of complex measures. IEEE Transactions on Visualization and Computer Graphics 19, 12 (2013), 2426–2435.
[15] Marc Christie and Eric Languénou. 2003. A constraint-based approach to camera path planning. In International Symposium on Smart Graphics. Springer, 172–181.
[16] William S Cleveland and Robert McGill. 1984. Graphical perception: Theory, experimentation, and application to the development of graphical methods. Journal of the American Statistical Association 79, 387 (1984), 531–554.
[17] Andy Cockburn and Bruce McKenzie. 2000. An evaluation of cone trees. In People and Computers XIV—Usability or Else! Springer, 425–436.
[18] Matthew Conlen and Jeffrey Heer. 2018. Idyll: A Markup Language for Authoring and Publishing Interactive Articles on the Web. In ACM User Interface Software & Technology (UIST). http://idl.cs.washington.edu/papers/idyll
[19] Matthew Conlen, Alex Kale, and Jeffrey Heer. 2019. Capture & Analysis of Active Reading Behaviors for Interactive Articles on the Web. Computer Graphics Forum (Proc. EuroVis) (2019). http://idl.cs.washington.edu/papers/idyll-analytics
[20] Robert L Cook, Loren Carpenter, and Edwin Catmull. 1987. The Reyes image rendering architecture. ACM SIGGRAPH Computer Graphics 21, 4 (1987), 95–102.
[21] Maxime Cordeil, Andrew Cunningham, Benjamin Bach, Christophe Hurter, Bruce H Thomas, Kim Marriott, and Tim Dwyer. 2019. IATK: An Immersive Analytics Toolkit. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, 200–209.
[22] Donna Cox and PA Fishwick. 2006. Metaphoric mappings: The art of visualization. Aesthetic Computing (2006), 89–114.
[23] Tim Dobbert. 2006. Matchmoving: The Invisible Art of Camera Tracking. John Wiley & Sons.
[24] Anthony J Ferri. 2007. Willing Suspension of Disbelief: Poetic Faith in Film. Lexington Books.
[25] Michael Freeman. 2007. The Photographer's Eye: Composition and Design for Better Digital Photos. CRC Press.
[26] Gustav Freytag. 1904. Technique of the Drama: An Exposition of Dramatic Composition and Art. Scott, Foresman.
[27] John Gibbs. 2002. Mise-en-scène: Film Style and Interpretation. Vol. 10. Wallflower Press.
[28] Esther Greussing and Hajo G Boomgaarden. 2019. Simply bells and whistles? Cognitive effects of visual aesthetics in digital longforms. Digital Journalism 7, 2 (2019), 273–293.
[29] Neil Halloran. 2015. The Fallen of World War II. http://www.fallen.io/ww2/
[30] Jeffrey Heer and George Robertson. 2007. Animated Transitions in Statistical Data Graphics. IEEE Transactions on Visualization and Computer Graphics 13, 6 (2007), 1240–1247.
[31] Mark Henne, Hal Hickel, Ewan Johnson, and Sonoko Konishi. 1996. The making of Toy Story [computer animation]. In COMPCON '96. Technologies for the Information Superhighway Digest of Papers. IEEE, 463–468.
[32] Tomlinson Holman. 2012. Sound for Film and Television. Taylor & Francis.
[33] Jessica Hullman and Nick Diakopoulos. 2011. Visualization rhetoric: Framing effects in narrative visualization. IEEE Transactions on Visualization and Computer Graphics 17, 12 (2011), 2231–2240.
[34] Jessica Hullman, Steven Drucker, Nathalie Henry Riche, Bongshin Lee, Danyel Fisher, and Eytan Adar. 2013. A deeper understanding of sequence in narrative visualization. IEEE Transactions on Visualization and Computer Graphics 19, 12 (2013), 2406–2415.
[35] Petra Isenberg, Bongshin Lee, Huamin Qu, and Maxime Cordeil. 2018. Immersive Visual Data Stories. In Immersive Analytics. Springer, 165–184.
[36] Alexander Ivanov, Kurtis Danyluk, Christian Jacob, and Wesley Willett. 2019. A Walk Among the Data. IEEE Computer Graphics and Applications 39, 3 (2019), 19–28.
[37] Yvonne Jansen, Pierre Dragicevic, Petra Isenberg, Jason Alexander, Abhijit Karnik, Johan Kildal, Sriram Subramanian, and Kasper Hornbæk. 2015. Opportunities and challenges for data physicalization. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. 3227–3236.
[38] Craig Kolb, Don Mitchell, Pat Hanrahan, et al. 1995. A realistic camera model for computer graphics. In SIGGRAPH, Vol. 95. 317–324.
[39] Robert Kosara and Jock Mackinlay. 2013. Storytelling: The next step for visualization. Computer 46, 5 (2013), 44–50.
[40] John Lasseter. 1987. Principles of traditional animation applied to 3D computer animation. In ACM SIGGRAPH Computer Graphics, Vol. 21. ACM, 35–44.
[41] Benjamin Lee, Dave Brown, Bongshin Lee, Christophe Hurter, Steven Drucker, and Tim Dwyer. 2020. Data Visceralization: Enabling Deeper Understanding of Data Using Virtual Reality. arXiv preprint arXiv:2009.00059 (2020).
[42] Renae Low and John Sweller. 2005. The modality principle in multimedia learning. The Cambridge Handbook of Multimedia Learning 147 (2005), 158.
[43] Kwan-Liu Ma, Isaac Liao, Jennifer Frazier, Helwig Hauser, and Helen-Nicole Kostis. 2011. Scientific storytelling using visualization. IEEE Computer Graphics and Applications 32, 1 (2011), 12–19.
[44] Kim Marriott, Falk Schreiber, Tim Dwyer, Karsten Klein, Nathalie Henry Riche, Takayuki Itoh, Wolfgang Stuerzlinger, and Bruce H Thomas. 2018. Immersive Analytics. Vol. 11190. Springer.
[45] David McCandless. 2012. Information is Beautiful. Collins, London.
[46] Sean McKenna, N Henry Riche, Bongshin Lee, Jeremy Boy, and Miriah Meyer. 2017. Visual narrative flow: Exploring factors shaping data visualization story reading experiences. In Computer Graphics Forum, Vol. 36. Wiley Online Library, 377–387.
[47] Timothy Morton. 2013. Hyperobjects: Philosophy and Ecology after the End of the World. University of Minnesota Press.
[48] NASA. 2020. Vital signs of the planet. https://climate.nasa.gov/vital-signs/.
[49] Rajendra K Pachauri, Myles R Allen, Vicente R Barros, John Broome, Wolfgang Cramer, Renate Christ, John A Church, Leon Clarke, Qin Dahe, Purnamita Dasgupta, et al. 2014. Climate change 2014: synthesis report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. IPCC.
[50] PacificVis. 2020. Visual Data Storytelling Contest. https://vimeo.com/pviscontest.
[51] Deokgun Park, Steven M Drucker, Roland Fernandez, and Niklas Elmqvist. 2017. Atom: A grammar for unit visualizations. IEEE Transactions on Visualization and Computer Graphics 24, 12 (2017), 3032–3043.
[52] Matt Pharr, Wenzel Jakob, and Greg Humphreys. 2016. Physically Based Rendering: From Theory to Implementation. Morgan Kaufmann.
[53] Carol Rasmussen. 2019. A Partnership Forged by Fire. https://climate.nasa.gov/news/2899/a-partnership-forged-by-fire/.
[54] Karel Reisz and Gavin Millar. 1971. The Technique of Film Editing.
[55] Donghao Ren. 2019. Visualization Authoring for Data-driven Storytelling. Ph.D. Dissertation. UC Santa Barbara.
[56] Arvind Satyanarayan and Jeffrey Heer. 2014. Authoring narrative visualizations with Ellipsis. In Computer Graphics Forum, Vol. 33. Wiley Online Library, 361–370.
[57] Edward Segel and Jeffrey Heer. 2010. Narrative visualization: Telling stories with data. IEEE Transactions on Visualization and Computer Graphics 16, 6 (2010), 1139–1148.
[58] Debika Shome and Sabine Marx. 2009. The Psychology of Climate Change Communication: A Guide for Scientists, Journalists, Educators, Political Aides, and the Interested Public. Center for Research on Environmental Decisions, New York.
[59] Ronell Sicat, Jiabao Li, JunYoung Choi, Maxime Cordeil, Won-Ki Jeong, Benjamin Bach, and Hanspeter Pfister. 2018. DXR: A toolkit for building immersive data visualizations. IEEE Transactions on Visualization and Computer Graphics 25, 1 (2018), 715–725.
[60] Charles D Stolper, Bongshin Lee, N Henry Riche, and John Stasko. 2016. Emerging and recurring data-driven storytelling techniques: Analysis of a curated collection of recent stories. (2016).
[61] Frank Thomas and Ollie Johnston. 1995. The Illusion of Life: Disney Animation. Hyperion, New York.
[62] J Thompson, Z Liu, W Li, and J Stasko. 2020. Understanding the Design Space and Authoring Paradigms for Animated Data Graphics. In Computer Graphics Forum, Vol. 39. Wiley Online Library, 207–218.
[63] Hamish Todd. 2017. Virus, the Beauty of the Beast. http://viruspatterns.com/
[64] Edward R Tufte. 2001. The Visual Display of Quantitative Information. Vol. 2. Graphics Press, Cheshire, CT.
[65] Craig Upson, TA Faulhaber, David Kamins, David Laidlaw, David Schlegel, Jeffrey Vroom, Robert Gurwitz, and Andries Van Dam. 1989. The application visualization system: A computational environment for scientific visualization. IEEE Computer Graphics and Applications 9, 4 (1989), 30–42.
[66] Jagoda Walny, Christian Frisson, Mieka West, Doris Kosminsky, Søren Knudsen, Sheelagh Carpendale, and Wesley Willett. 2019. Data Changes Everything: Challenges and Opportunities in Data Visualization Design Handoff. IEEE Transactions on Visualization and Computer Graphics 26, 1 (2019), 12–22.
[67] Colin Ware. 2012. Information Visualization: Perception for Design. Elsevier.
[68] I-Cheng Yeh, Chao-Hung Lin, Hung-Jen Chien, and Tong-Yee Lee. 2011. Efficient camera path planning algorithm for human motion overview. Computer Animation and Virtual Worlds 22, 2-3 (2011), 239–250.
[69] Qiyu Zhi, Alvitta Ottley, and Ronald Metoyer. 2019. Linking and Layout: Exploring the Integration of Text and Visualization in Storytelling. In Computer Graphics Forum, Vol. 38. Wiley Online Library, 675–685.

Figure 7: We analyzed the style of 50 cinematic visualizations using the features of mise-en-scène, cinematography, editing, and sound. An HTML version of this table, including URLs for each row, can be found at https://cinematic-visualization.github.io/.

[Figure 7 table: rows CV1–CV50 give each visualization's title, author, and publisher (e.g., Reuters, The New York Times, National Geographic, the Los Angeles Times), with markings indicating which cinematography, mise-en-scène, editing, and audio features each example employs. Legend: A. In-Situ Narrator; B. Anthropocentric Perspective; C. Resolving Scale; D. Author-Guided / Interactive Camera.]