arXiv:2503.07584v3 [cs.IR] 24 Jun 2025

Talking to GDELT Through Knowledge Graphs

Audun Myers, Max Vargas, Sinan G. Aksoy, Cliff Joslyn, Benjamin Wilson, Lee Burke, Tom Grimes

Abstract

In this work we study various Retrieval Augmented Generation (RAG) approaches to gain an understanding of the strengths and weaknesses of each approach in a question-answering analysis. To gain this understanding we use a case-study subset of the Global Database of Events, Language, and Tone (GDELT) dataset, as well as a corpus of raw text scraped from the online news articles. To retrieve information from the text corpus we implement a traditional vector store RAG as well as state-of-the-art large language model (LLM) based approaches for automatically constructing knowledge graphs (KGs) and retrieving the relevant subgraphs. In addition to these corpus approaches, we develop a novel ontology-based framework for constructing KGs from GDELT directly, which leverages the underlying schema of GDELT to create structured representations of global events. For retrieving relevant information from the ontology-based KGs we implement both direct graph queries and state-of-the-art graph retrieval approaches. We compare the performance of each method in a question-answering task. We find that while our ontology-based KGs are valuable for question-answering, automated extraction of the relevant subgraphs is challenging. Conversely, LLM-generated KGs, while capturing event summaries, often lack consistency and interpretability. Our findings suggest benefits of a synergistic approach between ontology- and LLM-based KG construction, with proposed avenues toward that end.

1 Introduction

In this work we study several approaches for communicating with a corpus of text via relevant text and knowledge graph (KG) representation and retrieval facilitated by Large Language Models (LLMs).
Our goal is to understand the benefits and drawbacks of Retrieval Augmented Generation (RAG) approaches to corpus management and analysis when combined with an LLM. Throughout we use as a case study a novel KG derived from the Global Database of Events, Language, and Tone (GDELT)1 [13] dataset. As a way to enhance LLM outputs, researchers and practitioners have been quick to apply LLMs to query and understand proprietary data through retrieval-augmented generation (RAG) [14]. It has been shown that reasoning over the typical RAG framework, which only takes advantage of the unstructured text articles, fails to capture global information about the provided data [5, 27]. Motivated by this limitation, there has been recent interest in adapting these techniques to the case where our data has a graph structure, so that the LLM can directly ingest important relationships in the knowledge base [5, 7, 15, 31]. More specifically, KGs [8] are graph structures which are richly attributed with typing and semantic information on both nodes and edges. KG techniques provide ways to automatically query and extract information stored in a KG without the user explicitly needing to understand query languages to probe their knowledge base. Typically, these AI-based search algorithms find subgraphs that can be used to answer a user-provided query. The interactions between KGs and LLMs have potential beyond merely question-answering and knowledge extraction (see different research directions outlined by Pan et al. [18]). In particular, reflective of KGs being used to enhance LLM outputs, LLMs can be used to enhance existing KGs or create new ones entirely from scratch. However, explorations of techniques to this end either (1) do not deal with imposing different ontological structures in graph creation or (2) only focus on extracting ontological structures using LLMs

1 https://www.gdeltproject.org/
[23, 28]. Throughout this work we use the GDELT dataset as a case study. GDELT is a massive collection of news reports that provide a real-time computational record of global events, published every 15 minutes. It aggregates information from various news sources, blogs, and social media platforms to construct a large collection of data including information on people, organizations, locations, themes, and emotions. Essentially, GDELT offers a snapshot of the world's collective events, enabling researchers and analysts to explore complex patterns and relationships within global society. By analyzing this data, it's possible to identify emerging trends, assess risks, understand public sentiment, and track the evolution of various issues over time. The applications of GDELT are diverse and far-reaching. Some of the most common use cases include event monitoring [16, 17, 29], risk assessment and prediction [6, 19, 20, 24, 26, 30], and social science research [2, 3, 4, 12]. GDELT describes its structure as a Global Knowledge Graph (GKG; specifically, we use the Global Knowledge Graph edition 2 (GKG2) of GDELT). But in fact GDELT-GKG2 is implemented as multiple linked tables recording information about the relationship between articles and events, and thus effectively has the structure of a relational database. Another important contribution of this paper is to actually realize GKG2 properly in the mathematical form of a KG, effectively a graph database, derived from and consistent with its native relational database form. To facilitate this effort, we have identified a lightweight ontology for GDELT in the form of its graph schema, realizing its relational database schema in a KG form. Using the KG that we construct from the GDELT-GKG2 dataset, we provide a case study to explore the utility of LLM-based tools to extract information and confirm that the KG can be used for question-answering in cases where traditional RAG fails.
As part of our analysis, we compare to KGs produced from processing various news articles with an LLM, prompting it to try and adhere to a reduced version of the same ontology. The current state of neurosymbolic work is noted for the plethora of experimental architectures available. While details are explicated below in Section 3.2, we preview ours in Figure 1, including the five methodological pathways which are quantitatively compared: 1) graph queries on the KG (called the DKG) derived "directly" from GKG2; 2) use of G-Retriever2 [7] against the same DKG; 3) RAG against a vector store representation of GKG2; 4) G-Retriever against a second KG (called the LKG) derived from using LlamaIndex3 [1] against the GDELT source articles; and 5) GraphRAG4 Q&A deployed against a third KG (called GRKG) using Microsoft's open-source GraphRAG package with default configuration parameters.

Figure 1: Pipeline of different experiments run to analyze the GDELT database using an LLM.

2 https://github.com/XiaoxinHe/G-Retriever
3 https://www.llamaindex.ai/
4 https://microsoft.github.io/graphrag/
2 Constructing a Knowledge Graph for GDELT

As previously mentioned, while the GDELT-GKG2 dataset is not actually natively in the form of a knowledge graph, it is advertised and frequently cited as being one. We believe that we are making a distinct contribution to the research community by converting the very popular GKG2 database into a proper KG. GKG2 is natively a database of three related tables:

• export.csv captures event information;
• GKG.csv captures article information; and
• mentions.csv relates which articles mention which events.

Figure 2: GDELT GKG 2.0 schema relating articles (GKG), mentions, and events (Export).

The database schema for these three CSV files is shown in Fig. 2 (see also [9]). The key characteristics of this relational database schema should be interpreted as follows:

• The three tables are color-coded by Events (green), Mentions (pink), and Articles (blue).
• The single table of Events is shown in multiple green sub-tables, simply for clarity and convenience in laying out a long record structure.
• Single-headed arrows represent one-to-many relationships between the tables. Specifically:
  – Each Event maps to multiple Mentions via the shared GLOBALEVENTID field.
  – Each Article maps to multiple Mentions via the DocumentIdentifier field on the Article side matching the MentionIdentifier field on the Mention side.
• In this way, the Mentions table acts as a "pairs file" recording a many-to-many relation between Events and Articles: each event can be mentioned in multiple articles, and dually each article can mention many events. Each Article also has a unique identifier through either the GKGRECORDID or the DocumentIdentifier field, since each row in the GKG data represents a single article.
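The one-to-many joins described above can be sketched in a few lines. This is a toy illustration, not the authors' code: the rows are made up, and only the linking fields from the GDELT schema are shown.

```python
# Toy sketch of how the three GKG2 tables link together
# (illustrative rows only; field names follow the GDELT schema).
events = {101: {"GLOBALEVENTID": 101, "EventCode": "0233"}}
articles = {"https://ex.com/a": {"DocumentIdentifier": "https://ex.com/a"}}
mentions = [  # the "pairs file": one row per (event, article) pair
    {"GLOBALEVENTID": 101, "MentionIdentifier": "https://ex.com/a"},
]

def articles_for_event(event_id):
    """All articles mentioning a given event (one-to-many via Mentions)."""
    return [m["MentionIdentifier"] for m in mentions
            if m["GLOBALEVENTID"] == event_id]

def events_for_article(url):
    """All events mentioned by a given article (the dual direction)."""
    return [m["GLOBALEVENTID"] for m in mentions
            if m["MentionIdentifier"] == url]
```

Because both lookups route through the Mentions table, the many-to-many relation between Events and Articles falls out of two one-to-many joins.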
Figure 3: GDELT GKG 2.0 ontology relating articles and events.

Methods to automatically determine the graphical form of a relational database are widely known [21]. Most naturally, consider a table T with m rows T[j], 1 ≤ j ≤ m, and n columns T.i, 1 ≤ i ≤ n. Then each of the m rows T[j] is represented as a node in one meta-class labeled by the primary keys. This node then has n outgoing edges, each connecting to a node in another meta-class representing the field value T[j].i, and labeled by the column name. The resulting "star" bipartite graphs are then linked over shared values, including across multiple tables. This method straightforwardly produces a graph schema consistent with a given RDB, which may or may not be of sufficient complexity to warrant the lofty description of "ontology". In our case, such a straightforward approach is mostly warranted, although as we will see, additional constraints in the event table argue for a somewhat more specific and idiosyncratic graph structure. After understanding the GDELT database schema, we developed a capability to convert (portions of) the GDELT database to a KG using an ontology as a graph typing schema, derived from the above relational schema. This is shown in Fig. 3, to be interpreted as follows:

• Nodes in the ontology indicate the types of nodes possible in the KG.
• Nodes are color-coded to indicate their source relational table.
• Fields in ⟨angled brackets⟩ indicate the field name in the schema.
• Solid edges indicate a field in a relational table and are labeled with the type of semantic relation.
• Dashed and bold edges indicate the structural, one-to-many relations in the relational schema.

The naming convention also captures the unique identifier for these CSV files, so that ⟨GLOBALEVENTID⟩ identifies unique Events, the pair (⟨GLOBALEVENTID⟩, ⟨MentionIdentifier⟩) identifies unique Mentions, and ⟨DocumentIdentifier⟩ identifies unique Articles. We again note that the document and mention identifiers hold the same value but have different field names (e.g., a URL is typically used for the document identifier and the same URL is used for the mention identifier).

3 Case Study - Baltimore Bridge Collapse

Here we provide a question-answering analysis of data collected over a recent and short period of time. The point of collecting recent data is that the LLMs used had not yet been trained on these events (at the time of this study), and thus the knowledge systems are needed to supply the LLM with relevant information. Specifically, this analysis uses a subset of the GDELT data collected on March 26th of 2024 from 12:00 AM to 10:00 AM, during and after the collapse of the Francis Scott Key Bridge in Baltimore, Maryland, which occurred at approximately 1:29 AM. This 10-hour window captures the media response to this disaster. We filtered the collected mentions data to only include rows in any of the related data that contained any of the keywords "Baltimore", "bridge", "collapse", or "ship". We then used all GLOBALEVENTIDs and MentionIdentifiers in this reduced mentions file to collect the relevant events and articles. This filtration resulted in using approximately 1.33% of the available data, with 371 events, 2047 mentions, and 209 articles.
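The keyword filter over the mentions rows can be sketched as follows. This is a minimal illustration under assumed details (case-insensitive substring matching over all fields; toy rows, since the real mentions table has many more columns):

```python
# Keyword filter over mentions rows, as described above
# (assumed heuristic: case-insensitive substring match on any field).
KEYWORDS = ("baltimore", "bridge", "collapse", "ship")

def matches(row):
    text = " ".join(str(v) for v in row.values()).lower()
    return any(k in text for k in KEYWORDS)

mentions = [
    {"GLOBALEVENTID": 1, "MentionIdentifier": "https://news.ex/key-bridge-collapse"},
    {"GLOBALEVENTID": 2, "MentionIdentifier": "https://news.ex/unrelated-story"},
]
kept = [m for m in mentions if matches(m)]
# The surviving GLOBALEVENTIDs / MentionIdentifiers are then used
# to pull the matching rows from the events and articles tables.
event_ids = {m["GLOBALEVENTID"] for m in kept}
```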
3.1 GDELT Knowledge Graphs

Using the GDELT data directly and the scraped text we construct a total of three KGs:

Direct KG (DKG): The first KG was simply a direct conversion of the subset of the original GDELT data into an equivalent KG as specified by our ontology in Fig. 3. This KG is shown in Fig. 4a.

LlamaIndex KG (LKG): The second KG was generated by an LLM deployed against a text corpus consisting of the source articles collected by scraping the URLs of the 209 articles included in our GDELT subset, and enriched with knowledge of the ontology. This KG is shown in Fig. 4b.

GraphRAG KG (GRKG): The final KG was generated using the same articles as the LKG, using Microsoft's open-source GraphRAG package with default configuration parameters. This KG is shown in Fig. 4c.

The example KG constructed using our ontology (DKG) is shown in a reduced form in Fig. 4a. The nodes are color-coded based on their source. Note that node and edge labels are not shown in order to facilitate clarity. This KG is one component and has 3,469 nodes and 18,052 edges. To construct a KG directly from the corpus of source document text (LKG), shown in Fig. 4b, we used Mixtral-8x7B [11] as our base model, following the procedure outlined by the LlamaIndex package developers [1]. The LLM is prompted to extract triples from the news articles according to a prompt provided in the kg_triplet_prompt parameter. Using the default prompt, the ontology does not get incorporated and the resulting KG is a star-shaped graph with a single central node and all other nodes connected to this center, there being no other edges. When we change the prompt to consider the entire ontology, we again get a star-shaped graph. Nontrivial graph structure arose when we prompted the language model with a reduced version of the ontology with adaptation for unstructured text.
In particular, our prompt asked for:

• Vertices of one of the following types: "Event", "Article", "Mention", "Person", "Quotation", "Organization", "Location", and "Other", with the last type serving as a catch-all.
• Edges of one of the following types: "Related to event where", "Has quotation", "Mentions person", "Mentions location", "Mentions organization", "Committed action of", "Has event acted on", and "Affiliated with", again with the last type serving as a catch-all.

(a) Example DKG constructed from ontology with no labels, but color coding set to match ontology. (b) Example LKG constructed from corpus of text using LlamaIndex. (c) Example GRKG constructed from corpus of text using GraphRAG, removing all isolated nodes. Large nodes have degree ≥ 25.

Figure 4: KG formations from GDELT data of the Baltimore bridge collapse event. This subset of data included 27 articles with 283 related mentions to 143 events during the Baltimore bridge collapse from midnight to 10:00 AM EST. The corpus of text was created from web scraping the 27 URLs associated to the articles.

While this reduced description produces a non-trivial knowledge graph, it is worth noting that node and edge types still struggle to adhere to the prescribed structure, potentially due to the nature of hallucinations. The results are shown in Fig. 4b. This pipeline also struggles with entity and relation resolution, for example creating separate nodes for 'Container ship' and 'Container ship.' The construction of the GRKG required a language model whose context window exceeded the capabilities of Mixtral-8x7B. We decided to use Llama-3.1-8B [22] for this case. It is worth noting that GraphRAG indiscriminately generates relations without the prescription of any ontology. It does, however, identify a specified set of entities, with defaults being "organizations", "persons", "geo(locations)", and "events." Similar to the LlamaIndex pipeline, GraphRAG struggles with entity resolution; an example of this is the existence of separate nodes for "DALI" and "THE DALI." It also spawns many small components, often isolated nodes; 435 of 968 total nodes are isolated in this example.
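Neither pipeline resolves such near-duplicates out of the box. A simple normalization pass, sketched below under assumed heuristics (lowercasing, stripping leading articles and trailing punctuation, neither of which is part of either pipeline's defaults), illustrates one way aliases like these could be merged:

```python
# Assumed normalization heuristic for merging near-duplicate entity
# nodes such as "DALI" / "THE DALI" or "Container ship" / "Container ship."
def canonical(name):
    n = name.strip().lower().rstrip(".")
    for article in ("the ", "a ", "an "):
        if n.startswith(article):
            n = n[len(article):]
    return n

nodes = ["DALI", "THE DALI", "Container ship", "Container ship."]
merged = {}
for n in nodes:
    merged.setdefault(canonical(n), []).append(n)
# merged now groups the original surface forms under one canonical key
```

A real resolver would need embedding- or string-similarity matching as well; this heuristic only catches the trivial variants observed here.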
3.2 Knowledge Graph Analysis Methodology

The three graphs in Fig. 4 show the significant size difference between the DKG, LKG, and GRKG. This is potentially due to the summarizing nature of LlamaIndex and GraphRAG, which only capture the key edges and nodes. Further, we find that the LLM used to create the LKG had considerable trouble adhering to the prescribed ontology, creating many new edge types outside those originally prescribed. For example, the LLM creates a triple (Maersk, Chartered, Container ship) when using LlamaIndex. While factually correct, the Chartered edge type is not specified in the LLM prompt. Even though the LLM struggles to adhere to our proposed structures, many of the edges that are mined from the article text are easily interpretable. To validate the quality of our ontology and to show some of its use cases in an automatic way, we provide a qualitative comparison where we use an LLM for question-answering on the GDELT dataset. In total
we have five pipelines for talking to GDELT with an LLM. These pipelines are shown in Fig. 1, where in each case we use an LLM to produce a final answer from the information obtained from each retrieval method. We note that this final processing is done with models with 7-8B parameters. With the GraphRAG pipeline we use Llama-3-8B [22] for question answering, and in all other cases we use Mistral-7B [10] in tandem with the E5-large-v2 [25] embedding model. In order, left to right:

1. Extract knowledge using direct graph queries to probe the DKG. This method requires an analyst to convert a natural language question into a suitable graph query. Consistent with the other techniques, we then use an LLM to interpret and repackage the retrieved information.

2. Use G-Retriever to automatically fetch a subgraph of the GDELT knowledge graph from a natural language question. This subgraph retrieval algorithm requires vectorizing the nodes and edges of the knowledge graph using a language embedding model. Once retrieved, the subgraph is fed into an LLM for interpretation.

3. Construct a knowledge graph by parsing full-text documents (scraped from GDELT URLs) with an LLM and LlamaIndex's functionality. We then proceed as in (2).

4. Create a vector store from the text corpus and ask questions in a typical RAG setting. This involves using a language embedding model to vectorize the text articles (after splitting them into 500-token chunks). Given a question, we use the same embedding model to vectorize it and, using the Euclidean metric to identify nearby vectors, extract the most similar text snippets. The original question and its associated context are then fed into an LLM to produce an answer.

5. Build a knowledge graph as in (3), using the GraphRAG ecosystem. We also use the provided question-answering capabilities of the GraphRAG package.
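Method (4) reduces to nearest-neighbor search in embedding space. The toy sketch below illustrates the retrieval step; random vectors stand in for the E5-large-v2 embeddings of 500-token chunks, and the dimensions are made up:

```python
import numpy as np

# Toy sketch of method (4): chunk embeddings are stored, and the
# query's nearest chunks under the Euclidean metric become the
# context passed to the LLM. Random vectors stand in for real
# E5-large-v2 embeddings.
rng = np.random.default_rng(0)
chunk_vecs = rng.normal(size=(10, 8))    # 10 chunks, 8-dim toy embeddings
query_vec = chunk_vecs[3] + 0.01         # a query very close to chunk 3

dists = np.linalg.norm(chunk_vecs - query_vec, axis=1)
top_k = np.argsort(dists)[:3]            # indices of the 3 nearest chunks
# The text of the top_k chunks, plus the question, form the LLM prompt.
```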
We note that it is typically unnecessary to use an LLM in method (1) and the answer can often be inferred after seeing the raw output of the graph query. In this case, the LLM effectively repackages the result of the graph query in an expected way. As such, we view this first method as a ‘ground-truth’ on the constructed KG. Tautologically, if the DKG can be used to answer a question, then a suitable graph query is able to extract the correct answer. Conversely, if the KG cannot be used to answer a given question, then a suitable graph query can prove that the answer cannot be found in the KG. It is worth noting why we only employ graph queries on the DKG and not those constructed using LlamaIndex or GraphRAG. As noted in our empirical observations of the LLM-produced knowledge graphs, these graphs have far less structure than GDELT itself. This lack of defined structure makes it difficult to form useful and interesting graph queries. 3.3 Results Table 1 shows a sample of questions that were passed through each of the five pipelines from Fig. 1. Exact queries to the GDELT knowledge graph were generated by searching for keywords in the edge triples com- prising the knowledge graph. Specifically, we searched for keywords in these triples by converting each triple to a sentence (stored as a string) and searching therein. We then used the edge induced subgraph from the edge sentences where keywords were found. 
The following keywords were used for each question:

• What is the name of the Bridge that collapsed and what river was it on?: Bridge, Collapse, River
• What is the name of the ship that collided with the Baltimore bridge?: Ship, Collide, Baltimore, Bridge
• How many articles did CNN publish about the Baltimore bridge collapse?: CNN, Baltimore, Bridge, Collapse
• On what date did the Baltimore Bridge collapse?: Date, Baltimore, Bridge, Collapse
• Who is Brandon Scott?: Brandon Scott
• Who is Niki Fennoy?: Niki Fennoy
• What are the top themes present in the data?: Has Theme
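The triple-to-sentence keyword search can be sketched as follows. The triples here are toy stand-ins for DKG edges, and the case-insensitive matching is an assumed detail:

```python
# Keyword search over edge triples: each triple is flattened to a
# sentence (a string) and matched against the query keywords.
triples = [
    ("Article:cnn.com/1", "mentions event", "Event:101"),
    ("Event:101", "has theme", "MARITIME INCIDENT"),
    ("Article:npr.org/2", "mentions person", "Brandon Scott"),
]

def edge_subgraph(keywords):
    """Return the edges whose flattened sentence contains any keyword."""
    hits = []
    for s, rel, o in triples:
        sentence = f"{s} {rel} {o}".lower()
        if any(k.lower() in sentence for k in keywords):
            hits.append((s, rel, o))
    return hits

context = edge_subgraph(["Brandon Scott"])
# The matching edge sentences become the context appended to the prompt.
```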
What is the name of the Bridge that collapsed and what river was it on?
• Graph Query on DKG: The Francis Scott Key Bridge and it was on the Patapsco River in Maryland.
• G-Retriever on DKG: The bridge is located in Sri Lanka. However, there is no explicit mention of the river's name.
• RAG using Vector Store: The Francis Scott Key Bridge collapsed into the Patapsco River.
• G-Retriever on LKG: The bridge that collapsed spanned over the Patapsco river.
• GraphRAG Q&A on GRKG: The Francis Scott Key Bridge, which spans the Patapsco River.

What is the name of the ship that collided with the Baltimore bridge?
• Graph Query on DKG: The name of the ship is not provided. However, the ship is a container ship and Singapore-flagged.
• G-Retriever on DKG: There is no mention of the name of the ship that collided with the bridge.
• RAG using Vector Store: The name of the ship that collided with the Baltimore bridge is the Dali.
• G-Retriever on LKG: The name of the ship that collided with the Baltimore bridge is Maersk.
• GraphRAG Q&A on GRKG: The ship that collided with the Baltimore bridge is the Dali.

How many articles did CNN publish about the Baltimore bridge collapse?
• Graph Query on DKG: CNN published 5 articles about the Baltimore bridge collapse.
• G-Retriever on DKG: CNN published a total of 2 articles about the Baltimore bridge collapse.
• RAG using Vector Store: CNN published three articles about the Baltimore bridge collapse.
• G-Retriever on LKG: There is only one article published by CNN about the Baltimore bridge collapse.
• GraphRAG Q&A on GRKG: CNN published at least two articles about the Baltimore bridge collapse.

On what date did the Baltimore Bridge collapse?
• Graph Query on DKG: The Baltimore Bridge collapsed on March 26, 2024.
• G-Retriever on DKG: I cannot directly answer that question based on the given data.
• RAG using Vector Store: The Baltimore Bridge collapsed on March 26, 2024.
• G-Retriever on LKG: The Baltimore Bridge collapsed at 1:20 a.m.
• GraphRAG Q&A on GRKG: The Baltimore Bridge collapsed on March 26, 2024.

Who is Brandon Scott?
• Graph Query on DKG: Brandon Scott is a person mentioned in several news articles related to the collapse of the Francis Scott Key Bridge.
• G-Retriever on DKG: Brandon Scott is the Mayor of Baltimore.
• RAG using Vector Store: Brandon Scott is the mayor of Baltimore, Maryland.
• G-Retriever on LKG: Brandon Scott is not mentioned in the given data.
• GraphRAG Q&A on GRKG: Brandon Scott is the mayor of Baltimore, Maryland.

Who is Niki Fennoy?
• Graph Query on DKG: Niki Fennoy is a person mentioned in various news articles related to the collapse of the Francis Scott Key Bridge.
• G-Retriever on DKG: The article from thepeninsularqatar.com mentions Niki Fennoy.
• RAG using Vector Store: I don't know. Niki Fennoy is not mentioned in the provided context.
• G-Retriever on LKG: Niki Fennoy is not present in the given data.
• GraphRAG Q&A on GRKG: Niki Fennoy is a city police spokesman.

What are the top themes present in the data?
• Graph Query on DKG: MARITIME INCIDENT, MARITIME MANMADE DISASTER, TAX FNCACT, WB 137 WATER.
• G-Retriever on DKG: MARITIME INCIDENT, CRISIS, TAX, NEWS, ETHNICITY.
• RAG using Vector Store: I don't have enough information to determine the specific themes present in the data.
• G-Retriever on LKG: EVENTS AND THEIR RELATIONSHIPS, LOCATIONS, ORGANIZATIONS, VESSELS.
• GraphRAG Q&A on GRKG: NEWS AND UPDATES, BRIDGE COLLAPSE, CONSTRUCTION CREW, SEARCH AND RESCUE, COMMUNITY REPORT.

Table 1: Example questions and answers highlighting deficiencies in each method for analyzing the GDELT data. Table highlight color legend: green is a correct answer, yellow is a partially correct answer, red is an incorrect answer, and grey is for no answer provided.

Our prompt was then constructed as the following: "Please answer the question given the following information:" with the list of edge sentences appended to the end. Solutions based on vector stores, GraphRAG, and direct graph queries on the DKG offer the best results for question answering. With direct graph queries, we can answer high-level and other types of questions that require us to aggregate information across the dataset. For example, we can easily extract information about recurrent themes or about a particular news source. While GraphRAG also provides functionality to answer high-level questions, we found that its performance lacked in this respect. It performed well on fine-grained questions. A vector store performed similarly well on these fine-grained questions that can be answered by a small number of excerpts from the source articles.
Notably, the second column suggests that the automated search functionalities provided in G-Retriever are often unable to retrieve a subgraph that can be used to provide an accurate answer. The problem gets worse when we use the KG created by LlamaIndex as our knowledge base, which is unable to suitably answer any question that we posed. These issues may partially be due to the out-of-box application of the G-Retriever system, and careful fine-tuning may improve performance. Regardless, we see the retained value in directly probing our data with hand-crafted queries and infer that further developments are needed for these automated information extraction systems to match the baseline performance on questions that require reasoning across the entire corpus. By combining the results of the direct graph queries on the DKG with those of GraphRAG and standard RAG, we can provide suitable answers to all of the presented questions. On one hand, we see that the DKG can provide better responses to high-level or aggregate questions about our data. Conversely, the vector store and GRKG can be used to identify local information in the documents that might be missing in the DKG. Regarding popular subjects, direct prompting of the LLM without dealing with knowledge graphs or vector
stores reveals that the LLM independently recognizes Brandon Scott as the mayor of Baltimore. For all other questions, the language model cannot answer the posed questions by itself due to the recency of the bridge collapse. To quantitatively evaluate the quality of the answers generated by our different question-answering methods, we require a set of ground truth answers for the questions posed. Table 2 presents these manually curated ground truth answers, representing the expected correct responses for each question used in our evaluation. These ground truth answers serve as the benchmark against which the performance of each system is measured.

• What is the name of the Bridge that collapsed and what river was it on? → The Francis Scott Key Bridge on the Patapsco River.
• What is the name of the ship that collided with the Baltimore bridge? → The ship was named the Dali.
• How many articles did CNN publish about the Baltimore bridge collapse? → CNN published 5 articles.
• On what date did the Baltimore Bridge collapse? → The collapse occurred on March 26, 2024.
• Who is Brandon Scott? → Brandon Scott is the Mayor of Baltimore.
• Who is Niki Fennoy? → Niki Fennoy is a city police spokesman.
• What are the top themes present in the data? → Themes include maritime incidents, manmade disaster, and water-related topics.

Table 2: Ground truth answers for the Baltimore bridge collapse questions.

To further validate our qualitative findings, Figure 5 presents a quantitative comparison of the semantic similarity of answers generated by the five different methods when querying the GDELT dataset related to the Baltimore bridge collapse. We compute the quality of each result by calculating cosine similarity5, a measure of the similarity between the embeddings of the predicted answer and the ground truth, with higher values indicating greater accuracy.
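The similarity score itself is standard cosine similarity between two embedding vectors; a minimal sketch, with toy vectors standing in for the all-MiniLM-L6-v2 embeddings:

```python
import numpy as np

# Cosine similarity between the embedding of a predicted answer and
# the ground-truth embedding. Toy vectors stand in for real
# sentence-transformers/all-MiniLM-L6-v2 embeddings.
def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

pred = np.array([0.1, 0.9, 0.2])   # embedding of a predicted answer
truth = np.array([0.1, 0.8, 0.3])  # embedding of the ground truth
score = cosine_similarity(pred, truth)  # closer to 1.0 means more similar
```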
The box plots illustrate the distribution of these similarity scores for each method: Graph Query on DKG, G-Retriever on DKG, RAG using Vector Store, G-Retriever on LKG, and GraphRAG Q&A on GRKG. As our qualitative analysis suggested, methods leveraging direct graph queries on the DKG, standard RAG with a vector store, and GraphRAG Q&A on the GRKG generally exhibit the highest cosine similarity scores, providing quantitative evidence of their superior performance in generating accurate and relevant answers compared to G-Retriever, especially with the LKG. This visualization quantitatively confirms the trends observed in the qualitative evaluation presented in Table 1.

4 Conclusion

This work has taken the GDELT GKG database and introduced an ontology to create a knowledge graph with rich structure. We found that while the large KG produced by the GDELT-GKG2 database contains rich information for question-answering, the ad hoc techniques for graph exploration warrant further investigation before they can be used reliably. That said, the information stored in the KG produced here is not without flaws, and we expect the example studied here to be useful for the broader synergy between KGs and LLMs as addressed by others [18]. While the KGs produced using LlamaIndex captured a summary of the events surrounding the Baltimore bridge collapse, the quality of the resulting structure was not suitable for question-answering. The techniques incorporated into the GraphRAG package did considerably better, but there is room for improvement in answering global questions, resolving duplicate entities, and incorporating ontologically-guided relation extraction. We believe that incorporating the ontology into the relation extraction will allow tools such as GraphRAG to better answer quantitative questions that only the ontology-based KG pipeline (DKG) was able to answer (e.g., the number of articles published on a topic).
Figure 5: Box plots comparing the cosine similarity scores of different question-answering methods applied to the GDELT data concerning the Baltimore bridge collapse. Higher cosine similarity indicates greater semantic similarity between the predicted and actual answers.

5 The semantic embeddings used to calculate the cosine similarity were generated using the sentence-transformers/all-MiniLM-L6-v2 model from the Sentence Transformers library.

Large language models continue to be adapted to solve problems across domains, and the case study on the KG built here presents many opportunities for future development. In particular, the debate around using raw documents or a curated KG should not be an ‘either/or’ choice, but rather an integration between these two modalities. Our findings demonstrate the significant value of combining the strengths of both approaches. We believe that by combining news articles in the form of a vector store with the rich ontological structure of the GDELT knowledge graph through an LLM interface, the resulting information extraction system would allow for better knowledge retrieval than either component alone. Specifically, the direct knowledge graph (DKG) excels at answering high-level or aggregate questions, providing a strong foundation of structured knowledge, while the vector store is well-suited for identifying local information within the documents that might be missing or less readily accessible in the DKG; GraphRAG has shown effectiveness on both local and global information. We therefore hypothesize that the optimal approach is an integrated system, leveraging the DKG for broad context and the vector store and GraphRAG for detailed, document-specific insights and enhanced global information retrieval. Further work must be done to determine the extent to which the textual article data can be used to refine the KG produced here; some of our examples showed that information stored in the DKG does not truthfully reflect the information in the articles. Conversely, we must determine how the constructed KG can be used to better search the associated vector store. Some of these research directions include the following:

• Use LLMs to add new information to an existing KG by creating new entities or edge relations.
Based on our observations with LlamaIndex and GraphRAG, careful monitoring is needed to ensure that the LLM formats its responses to properly adhere to the ontological structures and other existing structures in the KG. To this end, it can be beneficial to use the triples in the DKG produced here to fine-tune the language model or to guide its output with in-context learning.

• By introducing RAG capabilities, we can fact-check the KG against the raw textual information. For example, we found that Niki Fennoy was only mentioned in 3 articles but misattributed to 11 others. The use of LLMs gives a potential avenue to fact-check existing relations.

5 Acknowledgements

This work is under information release number PNNL-SA-209193.
References

[1] Knowledge graph index. https://docs.llamaindex.ai/en/stable/examples/index_structs/knowledge_graph/KnowledgeGraphDemo/. Accessed: 2024-07-22.

[2] Rawan Alamro, Andrew McCarren, and Amal Al-Rasheed. Predicting saudi stock market index by incorporating gdelt using multivariate time series modelling. In Advances in Data Science, Cyber Security and IT Applications: First International Conference on Computing, ICC 2019, Riyadh, Saudi Arabia, December 10–12, 2019, Proceedings, Part I 1, pages 317–328. Springer, 2019.

[3] Diego Bodas-Sagi and José Labeaga. Using gdelt data to evaluate the confidence on the spanish government energy policy. 2016.

[4] Emina Boudemagh and Izabela Moise. News media coverage of refugees in 2016: a gdelt case study. In Proceedings of the International AAAI Conference on Web and Social Media, volume 11, pages 743–750, 2017.

[5] Darren Edge, Ha Trinh, Newman Cheng, Joshua Bradley, Alex Chao, Apurva Mody, Steven Truitt, and Jonathan Larson. From local to global: A graph rag approach to query-focused summarization, 2024.

[6] Divyanshi Galla and James Burke. Predicting social unrest using gdelt. In International conference on machine learning and data mining in pattern recognition, pages 103–116. Springer, 2018.

[7] Xiaoxin He, Yijun Tian, Yifei Sun, Nitesh V Chawla, Thomas Laurent, Yann LeCun, Xavier Bresson, and Bryan Hooi. G-retriever: Retrieval-augmented generation for textual graph understanding and question answering. arXiv preprint arXiv:2402.07630, 2024.

[8] Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia d'Amato, Gerard De Melo, Claudio Gutierrez, Sabrina Kirrane, José Emilio Labra Gayo, Roberto Navigli, Sebastian Neumaier, Axel-Cyrille Ngonga Ngomo, Axel Polleres, Sabbir M Rashid, Anisa Rula, Lukas Schmelzeisen, Juan Sequeda, Steffen Staab, and Antoine Zimmermann. Knowledge graphs. ACM Computing Surveys, 54(4):1–37, 2021.

[9] Himarsha R Jayanetti, Erika Frydenlund, and Michele C Weigle. Exploring xenophobic events through gdelt data analysis. arXiv preprint arXiv:2305.01708, 2023.

[10] Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7b, 2023.

[11] Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mixtral of experts, 2024.

[12] Swetha Keertipati, Bastin Tony Roy Savarimuthu, Maryam Purvis, and Martin Purvis. Multi-level analysis of peace and conflict data in gdelt. In Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis, pages 33–40, 2014.

[13] Kalev Leetaru and Philip A Schrodt. Gdelt: Global data on events, location, and tone, 1979–2012. In ISA annual convention, volume 2, pages 1–49. Citeseer, 2013.

[14] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. Retrieval-augmented generation for knowledge-intensive nlp tasks. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS '20, Red Hook, NY, USA, 2020. Curran Associates Inc.
[15] Costas Mavromatis and George Karypis. Gnn-rag: Graph neural retrieval for large language model reasoning, 2024.

[16] Innocensia Owuor and Hartwig H Hochmair. Temporal relationship between daily reports of covid-19 infections and related gdelt and tweet mentions. Geographies, 3(3):584–609, 2023.

[17] Innocensia Owuor, Hartwig H Hochmair, and Sreten Cvetojevic. Tracking hurricane dorian in gdelt and twitter. AGILE: GIScience Series, 1:19, 2020.

[18] Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, and Xindong Wu. Unifying large language models and knowledge graphs: A roadmap. IEEE Transactions on Knowledge and Data Engineering, 36(7):3580–3599, July 2024.

[19] Fengcai Qiao and Kedi Chen. Predicting protest events with hidden markov models. In 2016 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), pages 109–114. IEEE, 2016.

[20] Fengcai Qiao, Pei Li, Xin Zhang, Zhaoyun Ding, Jiajun Cheng, and Hui Wang. Predicting social unrest events with hidden markov models using gdelt. Discrete Dynamics in Nature and Society, 2017(1):8180272, 2017.

[21] Juan F Sequeda, Syed H Tirmizi, Oscar Corcho, and Daniel P Miranker. Direct mapping sql databases to the semantic web: A survey. Knowledge Engineering Review, 26(4):445–486, 2011. doi:10.1017/S0269888911000208.

[22] Llama-3 Team. The llama 3 herd of models, 2024.

[23] Milena Trajanoska, Riste Stojanov, and Dimitar Trajanov. Enhancing knowledge graph construction using large language models, 2023.

[24] Vasiliki Voukelatou, Luca Pappalardo, Ioanna Miliou, Lorenzo Gabrielli, and Fosca Giannotti. Estimating countries' peace index through the lens of the world news as monitored by gdelt. In 2020 IEEE 7th international conference on data science and advanced analytics (DSAA), pages 216–225. IEEE, 2020.

[25] Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. Text embeddings by weakly-supervised contrastive pre-training. arXiv preprint arXiv:2212.03533, 2022.

[26] Congyu Wu and Matthew S Gerber. Forecasting civil unrest using social media and protest participation theory. IEEE Transactions on Computational Social Systems, 5(1):82–94, 2017.

[27] Zhentao Xu, Mark Jerome Cruz, Matthew Guevara, Tie Wang, Manasi Deshpande, Xiaofeng Wang, and Zheng Li. Retrieval-augmented generation with knowledge graphs for customer service question answering. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2024. ACM, July 2024.

[28] Liang Yao, Jiazhen Peng, Chengsheng Mao, and Yuan Luo. Exploring large language models for knowledge graph completion, 2024.

[29] James E Yonamine. A nuanced study of political conflict using the Global Datasets of Events Location and Tone (GDELT) dataset. The Pennsylvania State University, 2013.

[30] James E Yonamine. Predicting future levels of violence in afghanistan districts using gdelt. Unpublished manuscript, 2013.

[31] Zhaocheng Zhu, Xinyu Yuan, Mikhail Galkin, Sophie Xhonneux, Ming Zhang, Maxime Gazeau, and Jian Tang. A*net: A scalable path-based reasoning approach for knowledge graphs, 2023.
THE GDELT GLOBAL KNOWLEDGE GRAPH (GKG) DATA FORMAT CODEBOOK V2.1

2/19/2015

http://gdeltproject.org/

INTRODUCTION

This codebook introduces the GDELT Global Knowledge Graph (GKG) Version 2.1, which expands GDELT’s ability to quantify global human society beyond cataloging physical occurrences towards actually representing all of the latent dimensions, geography, and network structure of the global news. It applies an array of highly sophisticated natural language processing algorithms to each document to compute a range of codified metadata encoding key latent and contextual dimensions of the document. To sum up the GKG in a single sentence, it connects every person, organization, location, count, theme, news source, and event across the planet into a single massive network that captures what’s happening around the world, what its context is and who’s involved, and how the world is feeling about it, every single day.

It has been just short of sixteen months since the original prototype introduction of the GKG 1.0 system on November 3, 2013, and in that time the GKG system has found application in an incredible number and diversity of fields. The uniqueness of the GKG indicators in capturing the latent dimensions of society that precede physical unrest and their global scope has enabled truly unimaginable new applications. We’ve learned a lot over the past year in terms of the features and capabilities of greatest interest to the GKG community, and with this Version 2.1 release of the GKG, we are both integrating those new features and moving the GKG into production status (from its original alpha status) in recognition of the widespread production use of the system today.
Due to the vast number of use cases articulated for the GKG, a decision was made at its release to create a raw output format that could be processed into the necessary refined formats for a wide array of software packages and analysis needs and that would support a diverse assortment of extremely complex analytic needs in a single file. Unlike the primary GDELT event stream, which is designed for direct import into major statistical packages like R, the GKG file format requires more sophisticated preprocessing and users will likely want to make use of a scripting language like PERL or Python to extract and reprocess the data for import into a statistical package. Thus, users may require more advanced text processing and scripting language skills to work with the GKG data and additional nuance may be required when thinking about how to incorporate these indicators into statistical models and network and geographic constructs, as outlined in this codebook. Encoding the GKG in XML, JSON, RDF, or other file formats significantly increases the on-disk footprint of the format due to its complexity and size (thus why the GKG is only available in CSV format), though users requiring access to the GKG in these formats can easily write a PERL or Python or similar script to translate the GKG format to any file format needed. The GKG is optimized for fast scanning, storing one record per line and using a tab-delimited format to separate the fields. This makes it possible to use highly optimized fully parallelized streamed parsing to rapidly process the GKG. Similar to the 1.0 format, the files have a “.csv” ending, despite being tab-delimited, to address issues with some software packages that cannot handle “.txt” or “.tsv” endings for parsing tasks. The new GKG format preserves most of the previous fields in their existing format for backwards compatibility (and we will continue to generate the daily Version 1.0 files in parallel into the future), but
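Because each GKG file stores one record per line with tab-delimited fields (despite the “.csv” extension), a few lines of Python suffice to split it for downstream processing. A minimal sketch; the record IDs and URLs in the sample are invented placeholders, not real GKG values:

```python
def parse_gkg(text):
    """Split raw GKG output into records: one record per line,
    major fields separated by tabs (the ".csv" ending notwithstanding,
    the format is not comma-separated)."""
    return [line.split("\t") for line in text.splitlines() if line]

# Two invented records with three placeholder fields each.
sample = (
    "rec-1\tnews\thttp://example.com/a\n"
    "rec-2\tnews\thttp://example.com/b\n"
)
records = parse_gkg(sample)
```

Streaming line by line in this way (rather than loading whole files) is what makes the "fully parallelized streamed parsing" mentioned above practical for the full daily volume.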
{ "author": "Kalev Leetaru", "creationDate": "D:20150301133731-05'00'", "creationdate": "2015-03-01T13:37:31-05:00", "creator": "Microsoft® Word 2010", "file_path": "/home/donbr/don-aie-cohort8/cert-challenge/data/raw/gdelt-gkg-2-1.pdf", "format": "PDF 1.6", "keywords": "", "modDate": "D:20250201021147-08'00'", "moddate": "2025-02-01T02:11:47-08:00", "page": 0, "producer": "Microsoft® Word 2010", "source": "/home/donbr/don-aie-cohort8/cert-challenge/data/raw/gdelt-gkg-2-1.pdf", "subject": "", "title": "", "total_pages": 15, "trapped": "" }
adds a series of new capabilities that greatly enhance what can be done with the GKG data, opening entirely new analytic opportunities. Some of the most significant changes:

• Realtime Measurement of 2,300 Emotions and Themes. The GDELT Global Content Analysis Measures (GCAM) module represents what we believe is the largest deployment of sentiment analysis in the world: bringing together 24 emotional measurement packages that together assess more than 2,300 emotions and themes from every article in realtime, multilingual dimensions natively assessing the emotions of 15 languages (Arabic, Basque, Catalan, Chinese, French, Galician, German, Hindi, Indonesian, Korean, Pashto, Portuguese, Russian, Spanish, and Urdu). GCAM is designed to enable unparalleled assessment of the emotional undercurrents and reaction at a planetary scale by bringing together an incredible array of dimensions, from LIWC’s “Anxiety” to Lexicoder’s “Positivity” to WordNet Affect’s “Smugness” to RID’s “Passivity”.

• Realtime Translation of 65 Languages. GDELT 2.0 brings with it the public debut of GDELT Translingual, representing what we believe is the largest realtime streaming news machine translation deployment in the world: all global news that GDELT monitors in 65 languages, representing 98.4% of its daily non-English monitoring volume, is translated in realtime into English for processing through the entire GDELT Event and GKG/GCAM pipelines. GDELT Translingual is designed to allow GDELT to monitor the entire planet at full volume, creating the very first glimpses of a world without language barriers. The GKG system now processes every news report monitored by GDELT across these 65 languages, making it possible to trace people, organizations, locations, themes, and emotions across languages and media systems.

• Relevant Imagery, Videos, and Social Embeds.
A large fraction of the world’s news outlets now specify a hand-selected image for each article to appear when it is shared via social media that represents the core focus of the article. GDELT identifies this imagery in a wide array of formats including Open Graph, Twitter Cards, Google+, IMAGE_SRC, and SailThru formats. In addition, GDELT also uses a set of highly specialized algorithms to analyze the article content itself to identify inline imagery of high likely relevance to the story, along with videos and embedded social media posts (such as embedded Tweets or YouTube or Vine videos), a list of which is compiled. This makes it possible to gain a unique ground-level view into emerging situations anywhere in the world, even in those areas with very little social media penetration, and to act as a kind of curated list of social posts in those areas with strong social use.

• Quotes, Names, and Amounts. The world’s news contains a wealth of information on food prices, aid promises, numbers of troops, tanks, and protesters, and nearly any other countable item. GDELT 2.0 now attempts to compile a list of all “amounts” expressed in each article to offer numeric context to global events. In parallel, a new Names engine augments the existing Person and Organization names engines by identifying an array of other kinds of proper names, such as named events (Orange Revolution / Umbrella Movement), occurrences like the World Cup, named dates like Holocaust Remembrance Day, on through named legislation like the Iran Nuclear Weapon Free Act, Affordable Care Act, and Rouge National Urban Park Initiative. Finally, GDELT also identifies attributable quotes from each article, making it possible to see the evolving language used by political leadership across the world.

• Date Mentions.
We’ve heard from many of you the desire to encode the list of date references found in news articles and documents in order to identify repeating mentions of specific dates as possible “anniversary violence” indicators. All day, month, and year dates are now extracted from each document.

• Proximity Context. Perhaps the greatest change to the overall format from version 1.0 is the introduction of the new Proximity Context capability. The GKG records an enormously rich array
of contextual details from the news, encoding not only the people, organizations, locations and events driving the news, but also functional roles and underlying thematic context. However, with the previous GKG system it was difficult to associate those various data points with one another. For example, an article might record that Barack Obama, John Kerry, and Vladimir Putin all appeared somewhere in an article together, that the United States and Russia appeared in that article, and that the roles of President and Secretary of State were mentioned in that article, but there was no way to associate each person with the corresponding location and functional roles. GKG 2.1 addresses this by providing the approximate character offset of each reference to an object in the original article. While not allowing for deeper semantic association, this new field allows for simple proximity-based contextualization. In the case of the example article above, the mention of United States likely occurs much closer to Barack Obama and John Kerry than to Vladimir Putin, while Secretary of State likely occurs much closer to John Kerry than to the others. In this way, critical information on role, geographic, thematic association, and other connectivity can be explored. Pilot tests have already demonstrated that these proximity indicators can be highly effective at recovering these kinds of functional, thematic, and geographic affiliations.

• Over 100 New GKG Themes. There are more than 100 new themes in the GDELT Global Knowledge Graph, ranging from economic indicators like price gouging and the price of heating oil to infrastructure topics like the construction of new power generation capacity to social issues like marginalization and burning in effigy.
The list of recognized infectious diseases, ethnic groups, and terrorism organizations has been considerably expanded, and more than 600 global humanitarian and development aid organizations have been added, along with global currencies and massive new taxonomies capturing global animals and plants to aid with tracking species migration and poaching.

• Extensible XML Block. GDELT has historically relied primarily on mainstream news coverage for its source material. Whether from print, broadcast, or web-based mediums, news coverage across the world is relatively consistent in the kinds of information it captures. As GDELT encodes an ever-increasing range of materials, including academic journal articles and government reports, additional types of information are available to codify. As a first example of this, Leetaru, Perkins and Rewerts (2014)1 apply the GKG to encode more than 21 billion words of academic literature, including the entire contents of JSTOR, DTIC, CORE, CiteSeerX, and the Internet Archive’s 1.6 billion PDFs relevant to Africa and the Middle East. Academic literature contains a list of cited references at the bottom of each article that indicate the papers cited within that paper. This citation list is extremely valuable in constructing citation graphs over the literature to better understand trends and experts. Yet, such citation lists are unique to this class of literature and will not be found in ordinary news material, and thus it would be cumbersome to add additional fields to the GKG file format to handle each of these kinds of specialized data types. Instead, the GKG now includes a special field called V2EXTRASXML that is XML formatted and includes these kinds of specialized data types that are applicable only to subsets of the collection. Moving forward, this will allow the GKG to encode highly specialized enhanced information from specialized input streams.

• Unique Record Identifiers.
To bring the GKG in line with the practices of the GDELT Event Database, every GKG record is now assigned a unique identifier. As with the event database, sequential identifiers do not indicate sequential events, but an identifier uniquely identifies a record across the entire collection. The addition of unique record identifiers to the GKG will make it easier to uniquely refer to a particular GKG record. 1 http://dlib.org/dlib/september14/leetaru/09leetaru.html
• Single Data File. Previously there were two separate GKG data files, one containing Counts only and one containing the full GKG file. The original rationale for having two separate files was that users interested only in counts could download a much smaller daily file, but in practice nearly all applications use the full GKG file in order to make use of its thematic and other data fields to contextualize those counts and to tie them into the GDELT Event Database. Thus, we are eliminating the separate counts-only file to simplify the GKG data environment.

• Production Status. The GKG has now moved out of Alpha Experimental Release status and into production status. This means that the file format is now stabilized and will not change.

DIFFERENCES FROM GKG 2.0

The GKG 2.0 file format debuted in September 2014 and several special subcollection datasets were released in that format. With the debut of the GKG 2.1 format in February 2015, the format has remained largely the same, but with the addition of several new fields to accommodate a number of significant enhancements to the GKG system. While it was originally intended to release these new features in the GKG 2.0 format through the V2EXTRASXML field, the integral nature of several of these fields, the desire to more closely align some of them with the format used for the Events dataset, and the need to enable structural mapping of several of the fields to a forthcoming new hierarchical representation necessitated an upgrade of the GKG file format to the new GKG 2.1 format to accommodate these goals. Users will find that code designed for the GKG 2.0 format can be adapted to the GKG 2.1 format with minimal modification. Since the GKG 2.0 format was only used for a handful of special subcollection datasets and never made an appearance for the daily news content, a GKG 2.0 compatibility feed will not be made available; only the GKG 1.0 and GKG 2.1 formats will be supported for news content.
From a conceptual standpoint, two critical differences between the GKG 2.1/2.0 format and the GKG 1.0 format revolve around how entries are clustered and the minimum criteria for an article to be included in the GKG stream. Under the GKG 1.0 format, a deduplication process similar to that used for the Event stream was applied to the daily GKG export, grouping together all articles yielding the same GKG metadata. Thus, two articles listing the same set of locations, themes, people, and organizations would be grouped together in a single row with NumArticles holding a value of 2. With the introduction of the new GCAM system that assesses more than 2,300 emotions and themes for each article, it became clear that the GKG 1.0 approach would no longer work, since multiple articles yielding the same locations, themes, people, and organizations might use very different language to discuss them, yielding very different GCAM scores. In addition, the introduction of realtime translation into the GDELT architecture necessitated the ability to identify the provenance of metadata at the document level. Thus, GKG 2.1 no longer clusters documents together based on shared metadata: if 20 articles all contain the same list of extracted locations, themes, people, and organizations, they will appear as 20 separate entries in the GKG stream. The daily GKG 1.0 compatibility stream will, however, still continue to perform clustering. In addition to the clustering change, GKG 2.1 also changes the minimum inclusion criteria for an article to appear in the GKG. Under GKG 1.0 and 2.0, an article was required to have at least one successfully identified and geocoded geographic location before it would be included in the GKG output. However, many topics monitored by GDELT, such as cybersecurity, constitutional discourse, and major policy discussions, often do not have strong geographic centering, with many articles not mentioning even a single location.
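The clustering difference described above is easy to emulate in code: 2.1 emits one row per document, and 1.0-style clustering amounts to counting rows with identical metadata. A sketch using hypothetical, heavily simplified metadata tuples (real records carry far more fields):

```python
from collections import Counter

# Hypothetical 2.1-style entries: one row per document, here reduced to
# a (location, theme, person, organization) tuple for illustration.
rows = [
    ("baltimore", "MARITIME_INCIDENT", "brandon scott", "cnn"),
    ("baltimore", "MARITIME_INCIDENT", "brandon scott", "cnn"),
    ("baltimore", "MANMADE_DISASTER", "niki fennoy", "ap"),
]

# GKG 1.0-style clustering: identical metadata collapses to one entry,
# with the count playing the role of the old NumArticles field.
clusters = Counter(rows)
```

Note that this collapsing is exactly what GCAM makes lossy: two articles with identical extracted entities can carry very different emotional scores, which is why 2.1 keeps them as separate rows.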
This was excluding a considerable amount of content from the GKG system that is of high relevance to many GDELT user communities. Thus, beginning with GKG 2.1, an article is included in the GKG stream if it includes ANY successfully extracted information, INCLUDING GCAM emotional scores. An article that contains no recognizable geographic mentions, but lists several political leaders,
or mentions an argument over constitutionalism or a forthcoming policy announcement, will now be included in the GKG stream. Similarly, an article that has no recognizable metadata, but does yield GCAM emotional/thematic scores will also be included. When processing GKG 2.1 files, users should therefore be careful not to include any assumptions in their code as to whether an entry has extracted geographic information and should check the contents of this field for mapping or other geographic applications.

EXTRACTED FIELDS

The following section documents each of the fields contained in the GKG 2.1 format. Note: the former format had a NUMARTS field – this has been discontinued due to the new format’s support of multiple types of source collections beyond just news media and the requisite need to specify a source collection to interpret document identifiers in the new format (as discussed above). Thus, if multiple documents have identical computed metadata, in 1.0 format they would have been clustered together with NumArts used to indicate the multiple entries, while in the 2.1 format each document has a separate entry in the file. Fields prefaced with “V1” indicate they are identical in format and population to the previous GKG format. Those prefaced with “V1.5” mean they are largely similar, but have some changes. Those prefaced with “V2” are new to the format. Each row represents one document codified by the GKG and each row is tab-delimited for its major fields. Note: the “V1/V1.5/V2” designations are not included in the header row of the actual GKG output files. Note: the ordering of the fields in the file has substantially changed from Version 2.0 to Version 2.1.

• GKGRECORDID. (string) Each GKG record is assigned a globally unique identifier. Unlike the EVENT system, which uses semi-sequential numbering to assign numeric IDs to each event record, the GKG system uses a date-oriented serial number.
Each GKG record ID takes the form “YYYYMMDDHHMMSS-X” or “YYYYMMDDHHMMSS-TX” in which the first portion of the ID is the full date+time of the 15 minute update batch that this record was created in, followed by a dash, followed by sequential numbering for all GKG records created as part of that update batch. Records originating from a document that was translated by GDELT Translingual will have a capital “T” appearing immediately after the dash to allow filtering of English/non-English material simply by its record identifier. Thus, the fifth GKG record created as part of the update batch generated at 3:30AM on February 3, 2015 would have a GKGRECORDID of “20150203033000-5” and if it was based on a French-language document that was translated, it would have the ID “20150203033000-T5”. This ID can be used to uniquely identify this particular record across the entire GKG database. Note that due to the presence of the dash, this field should be treated as a string field and NOT as a numeric field.

• V2.1DATE. (integer) This is the date in YYYYMMDDHHMMSS format on which the news media used to construct this GKG file was published. NOTE that unlike the main GDELT event stream files, this date represents the date of publication of the document from which the information was extracted – if the article discusses events in the past, the date is NOT time-shifted as it is for the GDELT event stream. This date will be the same for all rows in a file and is redundant from a data processing standpoint, but is provided to make it easier to load GKG files directly into an SQL database for analysis. NOTE: for some special collections this value may be 0 indicating that the field is either not applicable or not known for those materials. For example, OCR’d historical document collections may not have robust metadata on publication date. NOTE: the GKG 2.0 format still encoded this date in YYYYMMDD format, while under GKG 2.1 it is now in YYYYMMDDHHMMSS format.
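Record identifiers in this scheme can be decomposed mechanically. The following is a minimal Python sketch (the function name and return structure are our own, not part of any official GDELT tooling) that splits a GKGRECORDID into its batch timestamp, translation flag, and sequence number, treating the ID as a string throughout:

```python
import re
from datetime import datetime

def parse_gkg_record_id(record_id: str) -> dict:
    """Split a GKGRECORDID ('YYYYMMDDHHMMSS-X' or 'YYYYMMDDHHMMSS-TX')
    into its update-batch time, translation flag, and sequence number."""
    match = re.fullmatch(r"(\d{14})-(T?)(\d+)", record_id)
    if match is None:
        raise ValueError(f"not a GKG 2.1 record ID: {record_id!r}")
    batch, t_flag, seq = match.groups()
    return {
        "batch_time": datetime.strptime(batch, "%Y%m%d%H%M%S"),
        "translated": t_flag == "T",  # came through GDELT Translingual
        "sequence": int(seq),
    }

# The documentation's example: fifth record of the 3:30AM Feb 3, 2015
# batch, derived from a translated French-language document.
parsed = parse_gkg_record_id("20150203033000-T5")
```

Because the dash and optional “T” make the field non-numeric, filtering translated from non-translated material reduces to a simple string test on the identifier alone.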
• V2SOURCECOLLECTIONIDENTIFIER. (integer) This is a numeric identifier that refers to the source collection the document came from and is used to interpret the DocumentIdentifier in the next column. In essence, it specifies how to interpret the DocumentIdentifier to locate the actual document. At present, it can hold one of the following values:
o 1 = WEB (The document originates from the open web and the DocumentIdentifier is a fully-qualified URL that can be used to access the document on the web.)
o 2 = CITATIONONLY (The document originates from a broadcast, print, or other offline source in which only a textual citation is available for the document. In this case the DocumentIdentifier contains the textual citation for the document.)
o 3 = CORE (The document originates from the CORE archive and the DocumentIdentifier contains its DOI, suitable for accessing the original document through the CORE website.)
o 4 = DTIC (The document originates from the DTIC archive and the DocumentIdentifier contains its DOI, suitable for accessing the original document through the DTIC website.)
o 5 = JSTOR (The document originates from the JSTOR archive and the DocumentIdentifier contains its DOI, suitable for accessing the original document through your JSTOR subscription if your institution subscribes to it.)
o 6 = NONTEXTUALSOURCE (The document originates from a textual proxy (such as closed captioning) of a non-textual information source (such as a video) available via a URL and the DocumentIdentifier provides the URL of the non-textual original source. At present, this Collection Identifier is used for processing of the closed captioning streams of the Internet Archive Television News Archive in which each broadcast is available via a URL, but the URL offers access only to the video of the broadcast and does not provide any access to the textual closed captioning used to generate the metadata.)
This code is used in order to draw a distinction between URL-based textual material (Collection Identifier 1, WEB) and URL-based non-textual material (like the Television News Archive).

• V2SOURCECOMMONNAME. (text) This is a human-friendly identifier of the source of the document. For material originating from the open web with a URL this field will contain the top-level domain the page was from. For BBC Monitoring material it will contain “BBC Monitoring” and for JSTOR material it will contain “JSTOR.” This field is intended for human display of major sources as well as for network analysis of information flows by source, obviating the requirement to perform domain or other parsing of the DocumentIdentifier field.

• V2DOCUMENTIDENTIFIER. (text) This is the unique external identifier for the source document. It can be used to uniquely identify the document and access it if you have the necessary subscriptions or authorizations and/or the document is public access. This field can contain a range of values, from URLs of open web resources to textual citations of print or broadcast material to DOI identifiers for various document repositories. For example, if SOURCECOLLECTION is equal to 1, this field will contain a fully-qualified URL suitable for direct access. If SOURCECOLLECTION is equal to 2, this field will contain a textual citation akin to what would appear in an academic journal article referencing that document (NOTE that the actual citation format will vary (usually between APA, Chicago, Harvard, or MLA) depending on a number of factors and no assumptions should be made on its precise format at this time due to the way in which this data is currently provided to GDELT – future efforts will focus on normalization of this field to a standard citation format). If SOURCECOLLECTION is 5, the field will contain a numeric or alpha-numeric DOI that can be typed into JSTOR’s search engine to access the document if your institution has a JSTOR subscription.
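The collection codes above lend themselves to a simple lookup table. A minimal sketch (the dictionary and helper names are our own conventions, not GDELT APIs) for labeling a record’s DocumentIdentifier by its V2SOURCECOLLECTIONIDENTIFIER:

```python
# Mapping of V2SOURCECOLLECTIONIDENTIFIER codes to collection names,
# taken from the value list documented above.
SOURCE_COLLECTIONS = {
    1: "WEB",               # DocumentIdentifier is a fully-qualified URL
    2: "CITATIONONLY",      # textual citation for offline material
    3: "CORE",              # DOI resolvable through the CORE website
    4: "DTIC",              # DOI resolvable through the DTIC website
    5: "JSTOR",             # DOI usable with a JSTOR subscription
    6: "NONTEXTUALSOURCE",  # URL of a non-textual source such as video
}

def describe_document(source_collection: int, document_identifier: str) -> str:
    """Label a DocumentIdentifier with its collection name for display."""
    kind = SOURCE_COLLECTIONS.get(source_collection, "UNKNOWN")
    return f"{kind}: {document_identifier}"
```

Guarding for unknown codes is deliberate: the documentation notes the set of collections may grow beyond news media over time.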
• V1COUNTS. (semicolon-delimited blocks, with pound symbol (“#”) delimited fields) This is the list of Counts found in this document. Each Count found is separated with a semicolon, while
the fields within a Count are separated by the pound symbol (“#”). Unlike the primary GDELT event stream, these records are not issued unique identifier numbers, nor are they dated. As an example of how to interpret this file, an entry with CountType=KILL, Number=47, ObjectType=”jihadists” indicates that the article stated that 47 jihadists were killed. This field is identical in format and population to the corresponding field in the GKG 1.0 format.
o Count Type. (text) This is the value of the NAME field from the Category List spreadsheet indicating which category this count is of. At the time of this writing, this is most often AFFECT, ARREST, KIDNAP, KILL, PROTEST, SEIZE, or WOUND, though other categories may appear here as well in certain circumstances when they appear in context with one of these categories, or as other Count categories are added over time. A value of “PROTEST” in this field would indicate that this is a count of the number of protesters at a protest.
o Count. (integer) This is the actual count being reported. If CountType is “PROTEST” and Number is 126, this means that the source article contained a mention of 126 protesters.
o Object Type. (text) This records any identifying information as to what the number refers to. For example, a mention of “20 Christian missionaries were arrested” will result in “Christian missionaries” being captured here. This field will be blank in cases where no identifying information could be identified.
o Location Type. See the documentation for V1Locations below.
o Location FullName. See the documentation for V1Locations below.
o Location CountryCode. See the documentation for V1Locations below.
o Location ADM1Code. See the documentation for V1Locations below.
o Location Latitude. See the documentation for V1Locations below.
o Location Longitude. See the documentation for V1Locations below.
o Location FeatureID. See the documentation for V1Locations below.

• V2.1COUNTS.
(semicolon-delimited blocks, with pound symbol (“#”) delimited fields) This field is identical to the V1COUNTS field except that it adds a final additional field to the end of each entry that records its approximate character offset in the document, allowing it to be associated with other entries from other “V2ENHANCED” fields (or Events) that appear in closest proximity to it. Note: unlike the other location-related fields, the Counts field does NOT add ADM2 support at this time. This is to maintain compatibility with assumptions that many applications make about the contents of the Count field. Those applications needing ADM2 support for Counts should cross-reference the FeatureID field of a given Count against the V2Locations field to determine its ADM2 value.

• V1THEMES. (semicolon-delimited) This is the list of all Themes found in the document. For the complete list of possible themes, see the Category List spreadsheet. At the time of this writing there are over 275 themes currently recognized by the system. This field is identical in format and population to the corresponding field in the GKG 1.0 format.

• V2ENHANCEDTHEMES. (semicolon-delimited blocks, with comma-delimited fields) This contains a list of all GKG themes referenced in the document, along with the character offsets of approximately where in the document they were found. For the complete list of possible themes, see the Category List spreadsheet. At the time of this writing there are over 300 themes currently recognized by the system. Each theme reference is separated by a semicolon, and within each reference, the name of the theme is specified first, followed by a comma, and then the approximate character offset of the reference of that theme in the document, allowing it to be associated with other entries from other “V2ENHANCED” fields that appear in closest
proximity to it. If a theme is mentioned multiple times in a document, each mention will appear separately in this field.

• V1LOCATIONS. (semicolon-delimited blocks, with pound symbol (“#”) delimited fields) This is a list of all locations found in the text, extracted through the Leetaru (2012) algorithm [2]. The algorithm is run in a more aggressive stance here than ordinary in order to extract every possible locative referent, so may have a slightly elevated level of false positives. NOTE: some locations have multiple accepted formal or informal names and this field is collapsed on name, rather than feature (since in some applications the understanding of a geographic feature differs based on which name was used to reference it). In cases where it is necessary to collapse by feature, the Geo_FeatureID column should be used, rather than the Geo_Fullname column. This is because the Geo_Fullname column captures the name of the location as expressed in the text and thus reflects differences in transliteration, alternative spellings, and alternative names for the same location. For example, Mecca is often spelled Makkah, while Jeddah is commonly spelled Jiddah or Jaddah. The Geo_Fullname column will reflect each of these different spellings, while the Geo_FeatureID column will resolve them all to the same unique GNS or GNIS feature identification number. For more information on the GNS and GNIS identifiers, see Leetaru (2012) [3]. This field is identical in format and population to the corresponding field in the GKG 1.0 format. NOTE: there was an error in this field from 2/19/2015 through midday 3/1/2015 that caused the CountryCode field to list the wrong country code in some cases.

[2] http://www.dlib.org/dlib/september12/leetaru/09leetaru.html
[3] http://www.dlib.org/dlib/september12/leetaru/09leetaru.html

o Location Type. (integer) This field specifies the geographic resolution of the match type and holds one of the following values: 1=COUNTRY (match was at the country level), 2=USSTATE (match was to a US state), 3=USCITY (match was to a US city or landmark), 4=WORLDCITY (match was to a city or landmark outside the US), 5=WORLDSTATE (match was to an Administrative Division 1 outside the US – roughly equivalent to a US state). This can be used to filter counts by geographic specificity, for example, extracting only those counts with a landmark-level geographic resolution for mapping. Note that matches with codes 1 (COUNTRY), 2 (USSTATE), and 5 (WORLDSTATE) will still provide a latitude/longitude pair, which will be the centroid of that country or state, but the FeatureID field below will contain its textual country or ADM1 code instead of a numeric featureid.
o Location FullName. (text) This is the full human-readable name of the matched location. In the case of a country it is simply the country name. For US and World states it is in the format of “State, Country Name”, while for all other matches it is in the format of “City/Landmark, State, Country”. This can be used to label locations when placing counts on a map. Note: this field reflects the precise name used to refer to the location in the text itself, meaning it may contain multiple spellings of the same location – use the FeatureID column to determine whether two location names refer to the same place.
o Location CountryCode. (text) This is the 2-character FIPS10-4 country code for the location. Note: GDELT continues to use the FIPS10-4 codes under USG guidance while GNS continues its formal transition to the successor Geopolitical Entities, Names, and Codes (GENC) Standard (the US Government profile of ISO 3166).
o Location ADM1Code. (text) This is the 2-character FIPS10-4 country code followed by the 2-character FIPS10-4 administrative division 1 (ADM1) code for the administrative division housing the landmark. In the case of the United States, this is the 2-character
shortform of the state’s name (such as “TX” for Texas). Note: see the notice above for CountryCode regarding the FIPS10-4 / GENC transition. Note: to obtain ADM2 (district-level) assignments for locations, you can either perform a spatial join against a ShapeFile template in any GIS software, or cross-walk the FeatureID to the GNIS/GNS databases – this will provide additional fields such as ADM2 codes and MGRS grid references for GNS.
o Location Latitude. (floating point number) This is the centroid latitude of the landmark for mapping. In the case of a country or administrative division this will reflect the centroid of that entire country/division.
o Location Longitude. (floating point number) This is the centroid longitude of the landmark for mapping. In the case of a country or administrative division this will reflect the centroid of that entire country/division.
o Location FeatureID. (text OR signed integer) This is the numeric GNS or GNIS FeatureID for this location OR a textual country or ADM1 code. More information on these values can be found in Leetaru (2012) [4]. Note: This field will be blank or contain a textual ADM1 code for country or ADM1-level matches – see above. Note: For numeric GNS or GNIS FeatureIDs, this field can contain both positive and negative numbers, see Leetaru (2012) for more information on this.

[4] http://www.dlib.org/dlib/september12/leetaru/09leetaru.html

• V2ENHANCEDLOCATIONS. (semicolon-delimited blocks, with pound symbol (“#”) delimited fields) This field is identical to the V1LOCATIONS field with the primary exception of an extra field appended to the end of each location block after its FeatureID that lists the approximate character offset of the reference to that location in the text. In addition, if a location appears multiple times in the article, it will be listed multiple times in this field. The only other modification from V1LOCATIONS is the addition of a single new field “Location ADM2Code” in between “Location ADM1Code” and “Location Latitude” [5].
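Given the block and field delimiters described above, a V1LOCATIONS value can be split into per-location records. The sketch below follows the documented field order; the function and field names are our own, and the sample ADM1 code, coordinates, and FeatureID are illustrative placeholders rather than real GNS values:

```python
# Field order for one V1LOCATIONS block, per the documentation above.
V1_LOCATION_FIELDS = [
    "type", "fullname", "countrycode", "adm1code",
    "latitude", "longitude", "featureid",
]

def parse_v1_locations(raw: str) -> list:
    """Split a V1LOCATIONS value (';' between blocks, '#' between
    fields) into one dict per location mention."""
    records = []
    for block in filter(None, raw.split(";")):
        rec = dict(zip(V1_LOCATION_FIELDS, block.split("#")))
        rec["type"] = int(rec["type"])
        # FeatureID stays a string: country/ADM1 matches carry textual
        # codes, and only city/landmark matches carry numeric IDs.
        for key in ("latitude", "longitude"):
            rec[key] = float(rec[key]) if rec[key] else None
        records.append(rec)
    return records

# Two spellings of the same city (placeholder ADM1 code, coordinates,
# and FeatureID, fabricated for illustration) resolve to one FeatureID:
locs = parse_v1_locations(
    "4#Makkah, Makkah, Saudi Arabia#SA#SA14#21.42#39.83#-123456;"
    "4#Mecca, Makkah, Saudi Arabia#SA#SA14#21.42#39.83#-123456"
)
```

This mirrors the guidance above: collapse locations on `featureid`, not on `fullname`, since the latter varies with spelling and transliteration.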
NOTE: there was an error in this field from 2/19/2015 through midday 3/1/2015 that caused the CountryCode field to list the wrong country code in some cases.

[5] http://blog.gdeltproject.org/global-second-order-administrative-divisions-now-available-from-gaul/
[6] http://www.dlib.org/dlib/september12/leetaru/09leetaru.html
[7] http://www.dlib.org/dlib/september12/leetaru/09leetaru.html

• V1PERSONS. (semicolon-delimited) This is the list of all person names found in the text, extracted through the Leetaru (2012) algorithm [6]. This name recognition algorithm is unique in that it is specially designed to recognize the African, Asian, and Middle Eastern names that yield significantly reduced accuracy with most name recognition engines. This field is identical in format and population to the corresponding field in the GKG 1.0 format.

• V2ENHANCEDPERSONS. (semicolon-delimited blocks, with comma-delimited fields) This contains a list of all person names referenced in the document, along with the character offsets of approximately where in the document they were found. Each person reference is separated by a semicolon, and within each reference, the person name is specified first, followed by a comma, and then the approximate character offset of the reference of that person in the document, allowing it to be associated with other entries from other “V2ENHANCED” fields that appear in closest proximity to it. If a person is mentioned multiple times in a document, each mention will appear separately in this field.

• V1ORGANIZATIONS. (semicolon-delimited) This is the list of all company and organization names found in the text, extracted through the Leetaru (2012) algorithm [7]. This is a combination of corporations, IGOs, NGOs, and any other local organizations such as a local fair
or council. This engine is highly adaptive and is currently tuned to err on the side of inclusion when it is less confident about a match to ensure maximal recall of smaller organizations around the world that are of especial interest to many users of the GKG. Conversely, certain smaller companies with names and contexts that do not provide a sufficient recognition latch may be missed or occasionally misclassified as a person name depending on context. It is highly recommended that users of the Persons and Organizations fields histogram the results and discard names appearing just once or twice to eliminate most of these false positive matches. This field is identical in format and population to the corresponding field in the GKG 1.0 format.

• V2ENHANCEDORGANIZATIONS. (semicolon-delimited blocks, with comma-delimited fields) This contains a list of all organizations/companies referenced in the document, along with the character offsets of approximately where in the document they were found. Each organization reference is separated by a semicolon, and within each reference, the name of the organization is specified first, followed by a comma, and then the approximate character offset of the reference of that organization in the document, allowing it to be associated with other entries from other “V2ENHANCED” fields that appear in closest proximity to it. If an organization is mentioned multiple times in a document, each mention will appear separately in this field.

• V1.5TONE. (comma-delimited floating point numbers) This field contains a comma-delimited list of six core emotional dimensions, described in more detail below. Each is recorded as a single precision floating point number. This field is nearly identical in format and population to the corresponding field in the GKG 1.0 format with the sole exception of adding the single new WordCount variable at the end.
o Tone. (floating point number) This is the average “tone” of the document as a whole.
The score ranges from -100 (extremely negative) to +100 (extremely positive). Common values range between -10 and +10, with 0 indicating neutral. This is calculated as Positive Score minus Negative Score. Note that both Positive Score and Negative Score are available separately below as well. A document with a Tone score close to zero may either have low emotional response or may have a Positive Score and Negative Score that are roughly equivalent to each other, such that they nullify each other. These situations can be detected either through looking directly at the Positive Score and Negative Score variables or through the Polarity variable.
o Positive Score. (floating point number) This is the percentage of all words in the article that were found to have a positive emotional connotation. Ranges from 0 to +100.
o Negative Score. (floating point number) This is the percentage of all words in the article that were found to have a negative emotional connotation. Ranges from 0 to +100.
o Polarity. (floating point number) This is the percentage of words that had matches in the tonal dictionary as an indicator of how emotionally polarized or charged the text is. If Polarity is high, but Tone is neutral, this suggests the text was highly emotionally charged, but had roughly equivalent numbers of positively and negatively charged emotional words.
o Activity Reference Density. (floating point number) This is the percentage of words that were active words offering a very basic proxy of the overall “activeness” of the text compared with a clinically descriptive text.
o Self/Group Reference Density. (floating point number) This is the percentage of all words in the article that are pronouns, capturing a combination of self-references and group-based discourse. News media material tends to have very low densities of such language, but this can be used to distinguish certain classes of news media and certain contexts.
o Word Count. (integer) This is the total number of words in the document. This field was added in version 1.5 of the format.

• V2.1ENHANCEDDATES. (semicolon-delimited blocks, with comma-delimited fields) This contains a list of all date references in the document, along with the character offsets of approximately where in the document they were found. If a date was mentioned multiple times in a document, it will appear multiple times in this field, once for each mention. Each date reference is separated by a semicolon, while the fields within a date are separated by commas. NOTE: this field is identical to GKG 2.0 with the sole exception of the addition of one additional Date Resolution type (4 = dates that include a month and day, but not a year).
o Date Resolution. This indicates whether the date was a month-day date that did not specify a year (4), a fully-resolved day-level date that included the year (3), a month-level date that included the year but not a day (2), or a year-level (1) date that did not include month or day-level information.
o Month. This is the month of the date represented as 1-12. For year dates this field will contain a 0.
o Day. This is the day of the date represented as 1-31. For month and year dates this field will contain a 0.
o Year. This is the year of the date. For Resolution=4 dates that include a month and day, but not a year, this field will contain a 0.
o Offset. This is the character offset of the date within the document, indicating approximately where it was found in the body. This can be used to associate the date with the entries from other “V2ENHANCED” fields that appeared in closest proximity to it.

• V2GCAM. (comma-delimited blocks, with colon-delimited key/value pairs) The Global Content Analysis Measures (GCAM) system runs an array of content analysis systems over each document and compiles their results into this field.
New content analysis systems will be constantly added to the GCAM pipeline over time, meaning the set of available fields will constantly grow over time. Given that the GCAM system is debuting with over 2,300 dimensions and will likely grow to include several thousand more dimensions within the coming months, it differs in its approach to encoding matches from the GKG’s native thematic coding system. Instead of displaying the full English name of a content analysis dictionary or dimension, it assigns each dictionary a unique numeric identifier (DictionaryID) and each dimension within that dictionary is assigned a unique identifier from 1 to the number of dimensions in the dictionary (DimensionID). Each dimension of each dictionary is assessed on a document and ONLY those dimensions that had one or more matches onto the document are reported. If a dimension did not register any matches on a document, it is not reported in order to save space. Thus, the absence of a dimension in this field can be interpreted as a score of 0. Each dimension’s score is written to the V2GCAM field separated by a comma. For each dimension, a numeric “key” identifies it of the form “DictionaryID.DimensionID”, followed by a colon, followed by its score. Most dictionaries are count-based, meaning they report how many words in the document were found in that dictionary. Thus, a score of 18 would mean that 18 words from the document were found in that dictionary. Count-based dimensions have a key that begins with “c”. Some dictionaries, such as SentiWordNet and SentiWords actually assign each word a numeric score and the output of the tool is the average of those scores for that document. For those dictionaries, an entry will report the number of words in the document that matched into that dictionary, and a separate entry, beginning with a “v” instead of a “c” will report its floating-point average value. 
The very first entry in the field has the special reserved key of “wc” and reports the total number of words in the document – this can be used to divide
the score of any word-count field to convert to a percentage density score. As an example, assume a document with 125 words. The General Inquirer dictionary has been assigned the DictionaryID of 2 and its “Bodypt” dimension has a DimensionID of 21. SentiWordNet has a DictionaryID of 10 and its “Positive” dimension has a DimensionID of 1. Thus, the V2GCAM field for a document might look like “wc:125,c2.21:4,c10.1:40,v10.1:3.21111111” indicating that the document had 125 words, that 4 of those words were found in the General Inquirer “Bodypt” lexicon, that 40 of those words were found in the SentiWordNet lexicon, and that the average numeric score of all of the words found in the SentiWordNet lexicon was 3.21111111. For a complete list of the available dimensions, along with their assigned DictionaryID and DimensionID codes, their assigned key, and their human name and full citation to cite that dimension, please see the GCAM Master Codebook [8]. NOTE: the scores for all dimensions, both English and non-English dimensions, will be listed together in this field – please see the codebooks to determine the source language of a specific dimension. NOTE: if non-English dictionaries are available for a given language and generated at least one match for that document, an additional “nwc” entry will be added which reflects the word count in the native language, since languages may have differing word counts in their native and translated forms. This count will be absent if no native dictionaries yielded a match for the document.

• V2.1SHARINGIMAGE. (textual URL) Many news websites specify a so-called “sharing image” for each article in which the news outlet manually specifies a particular image to be displayed when that article is shared via social media or other formats.
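Decoding the V2GCAM encoding follows directly from the worked example above. A minimal sketch (the function name is ours, not part of any GDELT tooling):

```python
def parse_gcam(raw: str) -> dict:
    """Decode a V2GCAM value into {key: score}. Absent keys mean a
    score of 0; 'v'-prefixed keys are floating-point dictionary
    averages, while 'wc', 'nwc', and 'c'-prefixed keys are integer
    word counts."""
    scores = {}
    for entry in raw.split(","):
        key, value = entry.split(":")
        scores[key] = float(value) if key.startswith("v") else int(value)
    return scores

# The worked example from the text above:
gcam = parse_gcam("wc:125,c2.21:4,c10.1:40,v10.1:3.21111111")
# Dividing a count-based score by "wc" yields a percentage density
# score, as described above.
density = gcam.get("c10.1", 0) / gcam["wc"]
```

Using `.get(key, 0)` when reading scores honors the convention that a dimension absent from the field scored 0 for that document.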
Not all news outlets specify a sharing image and some sites simply use their logo, but for those that do use this field, it represents the outlet’s selection of the single image that best captures the overall focus and contents of the story. GDELT currently recognizes a variety of formats for specifying this image, including Open Graph, Twitter Cards, Google+, IMAGE_SRC, and SailThru formats, among others.
• V2.1RELATEDIMAGES. (semicolon-delimited list of URLs). News articles frequently include photographs, figures, and other imagery to illustrate the story, ranging from a single illustrative photograph at top, to lengthy photo essays interspersed through the entirety of an article. Such imagery lends a rich visual tapestry to a news report, helping to clarify, for example, whether an article about a “protest blocking a highway” involves hundreds of activists along its length, or just a handful of people in one location, or whether a gas explosion leveled a building or merely shattered its windows. GDELT uses a suite of highly sophisticated algorithms to actually “read” through each article in the way a human would, evaluating each image to determine its relevance, based on positioning, captioning, referencing, and context, and compiles a list of the URLs of the images it deems to be most relevant to the article. Thus, unrelated inset boxes, advertisements, and other imagery are ignored and this field contains only a list of images most illustrative of the core of the story. This feature is in alpha release and involves a number of highly complex algorithms working together in concert and thus may make mistakes. We will be improving this algorithm over time and would appreciate any feedback you may have on the kinds of images it incorrectly includes and those that it misses.
• V2.1SOCIALIMAGEEMBEDS. (semicolon-delimited list of URLs).
News websites are increasingly embedding image-based social media posts inline in their articles to illustrate them with realtime reaction or citizen reporting from the ground. GDELT currently recognizes embedded image-based Twitter and Instagram posts and records their URLs in this field. Only those posts containing imagery are included in this field. This acts as a form of social media “curation” in which news outlets are wading through the deluge of social media reaction or reporting about a specific situation and hand-selecting key image posts deemed of greatest relevance, significance, credibility, and/or interest to their audiences. Only image-based embedded posts are included in this field – videos are identified in the following field.
8 http://data.gdeltproject.org/documentation/GCAM-MASTERCODEBOOK.xlsx
• V2.1SOCIALVIDEOEMBEDS. (semicolon-delimited list of URLs). News websites are increasingly embedding videos inline in their articles to illustrate them with realtime reaction or citizen reporting from the ground. Some news outlets that also have television properties may cross-link their television reporting into their web-based presentation. GDELT currently recognizes YouTube, DailyMotion, Vimeo, and Vine videos embedded in articles and records their URLs in this field. Similarly to the field above, this allows for a form of social media “curation” of the videos deemed by the mainstream media to be of greatest relevance, significance, credibility, and/or interest to their audiences.
• V2.1QUOTATIONS. (pound-delimited (“#”) blocks, with pipe-delimited (“|”) fields). News coverage frequently features excerpted statements from participants in an event and/or those affected by it, and these quotations can offer critical insights into differing perspectives and emotions surrounding that event. GDELT identifies and extracts all quoted statements from each article and additionally attempts to identify the verb introducing the quote to help lend additional context, separating “John retorted…” from “John agreed…” to show whether the speaker was agreeing with or rejecting the statement being made. Each quoted statement is separated by a “#” character, and within each block the following fields appear, separated by pipe (“|”) symbols:
o Offset. This is the character offset of the quoted statement within the document, indicating approximately where it was found in the body. This can be used to associate the quote with the entries from other “V2ENHANCED” fields that appeared in closest proximity to it.
o Length. This is the length of the quoted statement in characters.
o Verb.
This is the verb used to introduce the quote, allowing for separation of agreement versus disagreement quotes. May not be present for all quotes and not all verbs are recognized for this field.
o Quote. The actual quotation itself.
• V2.1ALLNAMES. (semicolon-delimited blocks, with comma-delimited fields) This field contains a list of all proper names referenced in the document, along with the character offsets of approximately where in the document they were found. Unlike the V2ENHANCEDPERSONS and V2ENHANCEDORGANIZATIONS fields, which are restricted to person and organization names, respectively, this field records ALL proper names referenced in the article, ranging from named events like the Orange Revolution, Umbrella Movement, and Arab Spring, to movements like the Civil Rights Movement, to festivals and occurrences like the Cannes Film Festival and World Cup, to named wars like World War I, to named dates like Martin Luther King Day and Holocaust Remembrance Day, to named legislation like the Iran Nuclear Weapon Free Act, Affordable Care Act, and Rouge National Urban Park Initiative. This field goes beyond people and organizations to capture a much broader view of the named events, objects, initiatives, laws, and other types of names in each article. Each name reference is separated by a semicolon, and within each reference, the name is specified first, followed by a comma, and then the approximate character offset of the reference of that name in the document, allowing it to be associated with other entries from other “V2ENHANCED” fields that appear in closest proximity to it. If a name is mentioned multiple times in a document, each mention will appear separately in this field. This field is designed to be maximally inclusive and in cases of ambiguity, to err on the side of inclusion of a name.
• V2.1AMOUNTS.
(semicolon-delimited blocks, with comma-delimited fields) This field contains a list of all precise numeric amounts referenced in the document, along with the character
offsets of approximately where in the document they were found. Its primary role is to allow for rapid numeric assessment of evolving situations (such as mentions of everything from the number of affected households to the estimated dollar amount of damage to the number of relief trucks and troops being sent into the area, to the price of food and medicine in the affected zone) and general assessment of geographies and topics. Both textual and numeric formats are supported (“twenty-five trucks”, “two million displaced civilians”, “hundreds of millions of dollars”, “$1.25 billion was spent”, “75 trucks were dispatched”, “1,345 houses were affected”, “we spent $25m on it”, etc). At this time, percentages are not supported due to the large amount of additional document context required for meaningful deciphering (“reduced by 45%” is meaningless without understanding what was reduced and whether the reduction was good or bad, often requiring looking across the entire enclosing paragraph for context). This field is designed to be maximally inclusive and in cases of ambiguity, to err on the side of inclusion of an amount even if the object of the amount is more difficult to decipher.
o Amount. This is the precise numeric value of the amount. Embedded commas are removed (“1,345,123” becomes 1345123), but decimal numbers are left as is (thus this field can range from a floating point number to a “long long” integer). Numbers in textual or mixed numeric-textual format (such as “2m” or “two million” or “tens of millions”) are converted to numeric digit representation.
o Object. This is the object that the amount is of or refers to. Thus, “20,000 combat soldiers” will result in “20000” in the Amount field and “combat soldiers” in this field.
o Offset. This is the character offset of the amount reference within the document, indicating approximately where it was found in the body.
This can be used to associate the amount with the entries from other “V2ENHANCED” fields that appeared in closest proximity to it.
• V2.1TRANSLATIONINFO. (semicolon-delimited fields) This field is used to record provenance information for machine translated documents indicating the original source language and the citation of the translation system used to translate the document for processing. It will be blank for documents originally in English. At this time the field will also be blank for documents translated by a human translator and provided to GDELT in English (such as BBC Monitoring materials) – in future this field may be expanded to include information on human translation pipelines, but at present it only captures information on machine translated materials. An example of the contents of this field might be “srclc:fra; eng:Moses 2.1.1 / MosesCore Europarl fr-en / GT-FRA 1.0”.
o SRCLC. This is the Source Language Code, representing the three-letter ISO639-2 code of the language of the original source material.
o ENG. This is a textual citation string that indicates the engine(s) and model(s) used to translate the text. The format of this field will vary across engines and over time and no expectations should be made on the ordering or formatting of this field. In the example above, the string “Moses 2.1.1 / MosesCore Europarl fr-en / GT-FRA 1.0” indicates that the document was translated using version 2.1.1 of the Moses SMT platform, 9 using the “MosesCore Europarl fr-en” translation and language models, with the final translation enhanced via GDELT Translingual’s own version 1.0 French translation and language models. A value of “GT-ARA 1.0” indicates that GDELT Translingual’s version 1.0 Arabic translation and language models were the sole resources used for translation.
Additional language systems used in the translation pipeline such as word segmentation systems are also captured in this field, such that a value of “GT-ZHO 1.0 / Stanford PKU” indicates that the Stanford Chinese Word Segmenter 10 was used to segment the text into individual words and sentences, which were then translated by GDELT Translingual’s own version 1.0 Chinese (Traditional or Simplified) translation and language models.
9 http://www.statmt.org/moses/
• V2EXTRASXML. (special XML formatted) This field is reserved to hold special non-standard data applicable to special subsets of the GDELT collection. It is unique among the other GKG fields in that it is XML-formatted and the specific format of a given block within this field is highly customized. At the time of this writing it is currently used to hold the citations list for the academic journal article subcollection 11 and is blank for news content.
o CITEDREFERENCESLIST. This block holds the results of the parsed cited references list that appeared at the bottom of the article, as extracted by the ParsCit software. 12 The ParsCit system is based on machine learning algorithms which can exhibit a significant amount of error and/or vary by source material. Within this block, each citation is enclosed in a <CITATION></CITATION> block. Within that block appear the following fields. Note: the fields may not appear in precisely this order and not all fields may be present for all citations, so parsing of this field should be flexible. For more information on the meaning of each field, please see the documentation for ParsCit. 13 This block is only available for the academic journal article subcollection.
▪ Authors. This is a nested block with an outer set of tags of <AUTHORS></AUTHORS> containing one or more inner blocks of <AUTHOR></AUTHOR>. Each inner block contains the name of an author of the cited paper. If a paper has multiple authors, there will be an <AUTHOR></AUTHOR> inner block for each author. Author names are order-standardized (“Leetaru, Kalev Hannes” will be normalized to “Kalev Hannes Leetaru”) but are not otherwise normalized and thus “K Leetaru”, “Kalev Leetaru”, “Kalev H.
Leetaru” and “Kalev Hannes Leetaru” would all appear as distinct author entries. Applications requiring name disambiguation will need to perform that task themselves.
▪ Title. This is the title of the cited work if it is an article.
▪ BookTitle. This is the title of the cited work if it is a book.
▪ Date. This is the date of the cited work.
▪ Journal. The journal the cited work was published in.
▪ Volume. The volume of the journal issue the cited work was published in.
▪ Issue. The issue of the journal issue the cited work was published in.
▪ Pages. This is the page range of the cited work.
▪ Institution. This is the institutional affiliation of the cited work.
▪ Publisher. The publisher of the cited work.
▪ Location. The location of the publisher of the cited work.
▪ Marker. This is the textual marker used to identify the work in the text (such as “Leetaru et al, 2014”). This can be used if you have access to the original article to locate references to the cited work in the article.
10 http://nlp.stanford.edu/software/segmenter.shtml
11 http://blog.gdeltproject.org/announcing-the-africa-and-middle-east-global-academic-literature-knowledge-graph-ame-gkg/
12 http://aye.comp.nus.edu.sg/parsCit/
13 http://aye.comp.nus.edu.sg/parsCit/
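The <CITATION> block structure described above can be read with a standard XML parser. The record below is fabricated for illustration, and real blocks may omit fields or order them differently, so the parse is deliberately tolerant:

```python
import xml.etree.ElementTree as ET

# A fabricated citation block following the tag names described above.
sample = """<CITATION>
  <AUTHORS><AUTHOR>Kalev Hannes Leetaru</AUTHOR><AUTHOR>Jane Doe</AUTHOR></AUTHORS>
  <Title>An Example Cited Article</Title>
  <Date>2014</Date>
</CITATION>"""

cite = ET.fromstring(sample)
authors = [a.text for a in cite.findall("./AUTHORS/AUTHOR")]
title = cite.findtext("Title")  # findtext returns None when a field is absent
date = cite.findtext("Date")
```

Using `findtext` for each optional field keeps the parse flexible, as the codebook recommends.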
THE GDELT EVENT DATABASE DATA FORMAT CODEBOOK V2.0
2/19/2015
http://gdeltproject.org/

INTRODUCTION
This codebook provides a quick overview of the fields in the GDELT Event file format and their descriptions. GDELT Event records are stored in an expanded version of the dyadic CAMEO format, capturing two actors and the action performed by Actor1 upon Actor2. A wide array of variables break out the raw CAMEO actor codes into their respective fields to make it easier to interact with the data, the Action codes are broken out into their hierarchy, the Goldstein ranking score is provided, a unique array of georeferencing fields offer estimated landmark-centroid-level geographic positioning of both actors and the location of the action, and a new “Mentions” table records the network trajectory of the story of each event “in flight” through the global media system. At present, only records from February 19, 2015 onwards are available in the GDELT 2.0 file format; however, in late Spring 2015 the entire historical backfile back to 1979 will be released in the GDELT 2.0 format. Records are stored one per line, separated by a newline (\n), and are tab-delimited (note that files have a “.csv” extension, but are actually tab-delimited). With the release of GDELT 2.0, the daily GDELT 1.0 Event files will still be generated each morning at least through the end of Spring 2015 to enable existing applications to continue to function without modification. Please note that at present, since GDELT 2.0 files are only available for events beginning February 19, 2015, you will need to use GDELT 1.0 to examine longitudinal patterns (since it stretches back to January 1, 1979) and use GDELT 2.0 moving forward for realtime events. There are now two data tables created every 15 minutes for the GDELT Event dataset. The first is the traditional Event table. This table is largely identical to the GDELT 1.0 format, but does have several changes as noted below.
In addition to the Event table there is now a new Mentions table that records all mentions of each event. As an event is mentioned across multiple news reports, each of those mentions is recorded in the Mentions table, along with several key indicators about that mention, including the location within the article where the mention appeared (in the lead paragraph versus being buried at the bottom) and the “confidence” of the algorithms in their identification of the event from that specific news report. The Confidence measure is a new feature in GDELT 2.0 that makes it possible to adjust the sensitivity of GDELT towards specific use cases. Those wishing to find the earliest glimmers of breaking events, or reports of very small-bore events that tend to appear only as part of periodic “round up” reports, can use the entire event stream, while those wishing to find only the largest events with strongly detailed descriptions can filter the Event stream to find only those events with the highest Confidence measures. This allows the GDELT Event stream to be dynamically filtered for each individual use case (learn more about the Confidence measure below). It also makes it possible to identify the “best” news report to return for a given event (filtering all mentions of an event for those with the highest Confidence scores, most prominent positioning within the article, and/or in a specific source language – such as Arabic coverage of a protest versus English coverage of that protest).
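Because the event files are tab-delimited despite their “.csv” extension, a reader should be configured accordingly. A minimal sketch, using fabricated, truncated rows in place of real records:

```python
import csv
import io

def read_event_rows(lines):
    """Parse tab-delimited GDELT event lines into lists of field strings."""
    return list(csv.reader(lines, delimiter="\t"))

# Two fabricated, truncated records standing in for real event rows
# (GlobalEventID, Day, and a country code only).
sample = io.StringIO("498765432\t20150219\tUSA\n498765433\t20150220\tFRA\n")
rows = read_event_rows(sample)
```

Keeping every field as a string at this stage avoids losing leading zeros in CAMEO codes; numeric fields can be converted individually afterwards.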
{ "author": "Kalev Leetaru", "creationDate": "D:20150219133950-05'00'", "creationdate": "2015-02-19T13:39:50-05:00", "creator": "Microsoft® Word 2010", "file_path": "/home/donbr/don-aie-cohort8/cert-challenge/data/raw/GDELT-Event_Codebook-V2.0.pdf", "format": "PDF 1.4", "keywords": "", "modDate": "D:20150219133950-05'00'", "moddate": "2015-02-19T13:39:50-05:00", "page": 0, "producer": "Microsoft® Word 2010", "source": "/home/donbr/don-aie-cohort8/cert-challenge/data/raw/GDELT-Event_Codebook-V2.0.pdf", "subject": "", "title": "", "total_pages": 11, "trapped": "" }
EVENT TABLE

EVENTID AND DATE ATTRIBUTES
The first few fields of an event record capture its globally unique identifier number, the date the event took place on, and several alternatively formatted versions of the date designed to make it easier to work with the event records in different analytical software programs that may have specific date format requirements. The parenthetical after each variable name gives the datatype of that field. Note that even though GDELT 2.0 operates at a 15 minute resolution, the date fields in this section still record the date at the daily level, since this is the resolution that event analysis has historically been performed at. To examine events at the 15 minute resolution, use the DATEADDED field (the second-to-last field in this table).
• GlobalEventID. (integer) Globally unique identifier assigned to each event record that uniquely identifies it in the master dataset. NOTE: While these will often be sequential with date, this is NOT always the case and this field should NOT be used to sort events by date: the date fields should be used for this. NOTE: There is a large gap in the sequence between February 18, 2015 and February 19, 2015 with the switchover to GDELT 2.0 – these are not missing events, the ID sequence was simply reset at a higher number so that it is possible to easily distinguish events created after the switchover to GDELT 2.0 from those created using the older GDELT 1.0 system.
• Day. (integer) Date the event took place in YYYYMMDD format. See the DATEADDED field for a YYYYMMDDHHMMSS date.
• MonthYear. (integer) Alternative formatting of the event date, in YYYYMM format.
• Year. (integer) Alternative formatting of the event date, in YYYY format.
• FractionDate. (floating point) Alternative formatting of the event date, computed as YYYY.FFFF, where FFFF is the percentage of the year completed by that day.
This collapses the month and day into a fractional range from 0 to 0.9999, capturing the 365 days of the year. The fractional component (FFFF) is computed as (MONTH * 30 + DAY) / 365. This is an approximation and does not correctly take into account the differing numbers of days in each month or leap years, but offers a simple single-number sorting mechanism for applications that wish to estimate the rough temporal distance between dates.

ACTOR ATTRIBUTES
The next fields describe attributes and characteristics of the two actors involved in the event. This includes the complete raw CAMEO code for each actor, its proper name, and associated attributes. The raw CAMEO code for each actor contains an array of coded attributes indicating geographic, ethnic, and religious affiliation and the actor’s role in the environment (political elite, military officer, rebel, etc). These 3-character codes may be combined in any order and are concatenated together to form the final raw actor CAMEO code. To make it easier to utilize this information in analysis, this section breaks these codes out into a set of individual fields that can be separately queried. NOTE: all attributes in this section other than CountryCode are derived from the TABARI ACTORS dictionary and are NOT supplemented from information in the text. Thus, if the text refers to a group as “Radicalized
terrorists,” but the TABARI ACTORS dictionary labels that group as “Insurgents,” the latter label will be used. Use the GDELT Global Knowledge Graph to enrich actors with additional information from the rest of the article. NOTE: the CountryCode field reflects a combination of information from the TABARI ACTORS dictionary and text, with the ACTORS dictionary taking precedence, and thus if the text refers to “French Assistant Minister Smith was in Moscow,” the CountryCode field will list France, while the geographic fields discussed at the end of this manual may list Moscow as his/her location. NOTE: One of the two actor fields may be blank in complex or single-actor situations or may contain only minimal detail for actors such as “Unidentified gunmen.” GDELT currently uses the CAMEO version 1.1b3 taxonomy. For more information on what each specific code in the fields below stands for and the complete available taxonomy of the various fields below, please see the CAMEO User Manual 1 or the GDELT website for crosswalk files. 2
• Actor1Code. (string) The complete raw CAMEO code for Actor1 (includes geographic, class, ethnic, religious, and type classes). May be blank if the system was unable to identify an Actor1.
• Actor1Name. (string) The actual name of the Actor1. In the case of a political leader or organization, this will be the leader’s formal name (GEORGE W BUSH, UNITED NATIONS), for a geographic match it will be either the country or capital/major city name (UNITED STATES / PARIS), and for ethnic, religious, and type matches it will reflect the root match class (KURD, CATHOLIC, POLICE OFFICER, etc). May be blank if the system was unable to identify an Actor1.
• Actor1CountryCode. (string) The 3-character CAMEO code for the country affiliation of Actor1. May be blank if the system was unable to identify an Actor1 or determine its country affiliation (such as “UNIDENTIFIED GUNMEN”).
• Actor1KnownGroupCode.
(string) If Actor1 is a known IGO/NGO/rebel organization (United Nations, World Bank, al-Qaeda, etc) with its own CAMEO code, this field will contain that code.
• Actor1EthnicCode. (string) If the source document specifies the ethnic affiliation of Actor1 and that ethnic group has a CAMEO entry, the CAMEO code is entered here. NOTE: a few special groups like ARAB may also have entries in the type column due to legacy CAMEO behavior. NOTE: this behavior is highly experimental and may not capture all affiliations properly – for more comprehensive and sophisticated identification of ethnic affiliation, it is recommended that users use the GDELT Global Knowledge Graph’s ethnic, religious, and social group taxonomies and post-enrich actors from the GKG.
• Actor1Religion1Code. (string) If the source document specifies the religious affiliation of Actor1 and that religious group has a CAMEO entry, the CAMEO code is entered here. NOTE: a few special groups like JEW may also have entries in the geographic or type columns due to legacy CAMEO behavior. NOTE: this behavior is highly experimental and may not capture all affiliations properly – for more comprehensive and sophisticated identification of religious affiliation, it is recommended that users use the GDELT Global Knowledge Graph’s ethnic, religious, and social group taxonomies and post-enrich actors from the GKG.
• Actor1Religion2Code. (string) If multiple religious codes are specified for Actor1, this contains the secondary code. Some religion entries automatically use two codes, such as Catholic, which invokes Christianity as Code1 and Catholicism as Code2.
• Actor1Type1Code. (string) The 3-character CAMEO code of the CAMEO “type” or “role” of Actor1, if specified.
This can be a specific role such as Police Forces, Government, Military, Political Opposition, Rebels, etc, a broad role class such as Education, Elites, Media, Refugees, or organizational classes like Non-Governmental Movement. Special codes such as Moderate and Radical may refer to the operational strategy of a group.
• Actor1Type2Code. (string) If multiple type/role codes are specified for Actor1, this returns the second code.
• Actor1Type3Code. (string) If multiple type/role codes are specified for Actor1, this returns the third code.
The set of fields above is repeated for Actor2, with each field prefaced with “Actor2” instead of “Actor1”. The definitions and values of each field are the same as above.
1 http://gdeltproject.org/data/documentation/CAMEO.Manual.1.1b3.pdf
2 http://gdeltproject.org/

EVENT ACTION ATTRIBUTES
The following fields break out various attributes of the event “action” (what Actor1 did to Actor2) and offer several mechanisms for assessing the “importance” or immediate-term “impact” of an event. NOTE: the various fields in this section recording the amount of coverage an event has received are included solely for legacy purposes – the new Mentions table should be used instead in most cases.
• IsRootEvent. (integer) The system codes every event found in an entire document, using an array of techniques to dereference and link information together. A number of previous projects such as the ICEWS initiative have found that events occurring in the lead paragraph of a document tend to be the most “important.” This flag can therefore be used as a proxy for the rough importance of an event to create subsets of the event stream. NOTE: this field refers only to the first news report to mention an event and is not updated if the event is found in a different context in other news reports. It is included for legacy purposes – for more precise information on the positioning of an event, see the Mentions table.
• EventCode. (string) This is the raw CAMEO action code describing the action that Actor1 performed upon Actor2.
NOTE: it is strongly recommended that this field be stored as a string instead of an integer, since the CAMEO taxonomy can include zero-leaded event codes that can make distinguishing between certain event types more difficult when stored as an integer.
• EventBaseCode. (string) CAMEO event codes are defined in a three-level taxonomy. For events at level three in the taxonomy, this yields the code of its level-two parent. For example, code “0251” (“Appeal for easing of administrative sanctions”) would yield an EventBaseCode of “025” (“Appeal to yield”). This makes it possible to aggregate events at various resolutions of specificity. For events at levels two or one, this field will be set to EventCode. NOTE: it is strongly recommended that this field be stored as a string instead of an integer, since the CAMEO taxonomy can include zero-leaded event codes that can make distinguishing between certain event types more difficult when stored as an integer.
• EventRootCode. (string) Similar to EventBaseCode, this defines the root-level category the event code falls under. For example, code “0251” (“Appeal for easing of administrative sanctions”) has a root code of “02” (“Appeal”). This makes it possible to aggregate events at various resolutions of specificity. For events at levels two or one, this field will be set to EventCode. NOTE: it is strongly recommended that this field be stored as a string instead of an integer, since the CAMEO taxonomy can include zero-leaded event codes that can make distinguishing between certain event types more difficult when stored as an integer.
• QuadClass. (integer) The entire CAMEO event taxonomy is ultimately organized under four primary classifications: Verbal Cooperation, Material Cooperation, Verbal Conflict, and Material
Conflict. This field specifies this primary classification for the event type, allowing analysis at the highest level of aggregation. The numeric codes in this field map to the Quad Classes as follows: 1=Verbal Cooperation, 2=Material Cooperation, 3=Verbal Conflict, 4=Material Conflict.

• GoldsteinScale. (floating point) Each CAMEO event code is assigned a numeric score from -10 to +10, capturing the theoretical potential impact that type of event will have on the stability of a country. This is known as the Goldstein Scale. This field specifies the Goldstein score for each event type. NOTE: this score is based on the type of event, not the specifics of the actual event record being recorded – thus two riots, one with 10 people and one with 10,000, will both receive the same Goldstein score. This can be aggregated to various levels of time resolution to yield an approximation of the stability of a location over time.

• NumMentions. (integer) This is the total number of mentions of this event across all source documents during the 15 minute update in which it was first seen. Multiple references to an event within a single document also contribute to this count. This can be used as a method of assessing the “importance” of an event: the more discussion of that event, the more likely it is to be significant. The total universe of source documents and the density of events within them vary over time, so it is recommended that this field be normalized by the average or other measure of the universe of events during the time period of interest. This field is actually a composite score of the total number of raw mentions and the number of mentions extracted from reprocessed versions of each article (see the discussion for the Mentions table). NOTE: this field refers only to the first news report to mention an event and is not updated if the event is found in a different context in other news reports. It is included for legacy purposes – for more precise information on the positioning of an event, see the Mentions table.

• NumSources. (integer) This is the total number of information sources containing one or more mentions of this event during the 15 minute update in which it was first seen. This can be used as a method of assessing the “importance” of an event: the more discussion of that event, the more likely it is to be significant. The total universe of sources varies over time, so it is recommended that this field be normalized by the average or other measure of the universe of events during the time period of interest. NOTE: this field refers only to the first news report to mention an event and is not updated if the event is found in a different context in other news reports. It is included for legacy purposes – for more precise information on the positioning of an event, see the Mentions table.

• NumArticles. (integer) This is the total number of source documents containing one or more mentions of this event during the 15 minute update in which it was first seen. This can be used as a method of assessing the “importance” of an event: the more discussion of that event, the more likely it is to be significant. The total universe of source documents varies over time, so it is recommended that this field be normalized by the average or other measure of the universe of events during the time period of interest. NOTE: this field refers only to the first news report to mention an event and is not updated if the event is found in a different context in other news reports. It is included for legacy purposes – for more precise information on the positioning of an event, see the Mentions table.

• AvgTone. (numeric) This is the average “tone” of all documents containing one or more mentions of this event during the 15 minute update in which it was first seen. The score ranges from -100 (extremely negative) to +100 (extremely positive). Common values range between -10 and +10, with 0 indicating neutral. This can be used as a method of filtering the “context” of events as a subtle measure of the importance of an event and as a proxy for the “impact” of that event. For example, a riot event with a slightly negative average tone is likely to have been a minor occurrence, whereas if it had an extremely negative average tone, it suggests a far more serious occurrence. A riot with a positive score likely suggests a very minor
occurrence described in the context of a more positive narrative (such as a report of an attack occurring in a discussion of improving conditions on the ground in a country and how the number of attacks per day has been greatly reduced). NOTE: this field refers only to the first news report to mention an event and is not updated if the event is found in a different context in other news reports. It is included for legacy purposes – for more precise information on the positioning of an event, see the Mentions table. NOTE: this provides only a basic tonal assessment of an article and it is recommended that users interested in emotional measures use the Mentions and Global Knowledge Graph tables to merge the complete set of 2,300 emotions and themes from the GKG GCAM system into their analysis of event records.

EVENT GEOGRAPHY

The final set of fields adds a novel enhancement to the CAMEO taxonomy, georeferencing each event along three primary dimensions to the landmark-centroid level. To do this, the fulltext of the source document is processed using fulltext geocoding and automatic disambiguation to identify every geographic reference.3 The closest reference to each of the two actors and to the action reference are then encoded in these fields. The georeferenced location for an actor may not always match the Actor1_CountryCode or Actor2_CountryCode field, such as in a case where the President of Russia is visiting Washington, DC in the United States, in which case the Actor1_CountryCode would contain the code for Russia, while the georeferencing fields below would contain a match for Washington, DC. It may not always be possible for the system to locate a match for each actor or location, in which case one or more of the fields may be blank. The Action fields capture the location information closest to the point in the event description that contains the actual statement of action and is the best location to use for placing events on a map or in other spatial context.
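The roll-up behavior of the EventBaseCode, EventRootCode, and QuadClass fields described above can be sketched in a few lines of Python. This is an illustration only: the function and dictionary names are ours, not part of GDELT, and it assumes CAMEO codes have been read as strings so that leading zeros survive, as the codebook recommends.

```python
# Sketch: rolling a CAMEO EventCode string up the three-level taxonomy.
# Codes must be handled as strings: stored as an integer, "0251" becomes
# 251, losing the leading-zero distinction the codebook warns about.

QUAD_CLASS = {1: "Verbal Cooperation", 2: "Material Cooperation",
              3: "Verbal Conflict", 4: "Material Conflict"}

def event_base_code(event_code):
    """Level-three codes (four characters) yield their first three
    characters; level-one/two codes are returned unchanged, matching the
    codebook's "set to EventCode" rule."""
    return event_code[:3] if len(event_code) == 4 else event_code

def event_root_code(event_code):
    """The two-character root-level category of the code."""
    return event_code[:2]

# "0251" ("Appeal for easing of administrative sanctions"):
base = event_base_code("0251")   # "025" ("Appeal to yield")
root = event_root_code("0251")   # "02"  ("Appeal")
```

Grouping events by `event_root_code` (or by QuadClass) then gives aggregation at progressively coarser resolutions of specificity.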
To find all events located in or relating to a specific city or geographic landmark, the Geo_FeatureID column should be used, rather than the Geo_Fullname column. This is because the Geo_Fullname column captures the name of the location as expressed in the text and thus reflects differences in transliteration, alternative spellings, and alternative names for the same location. For example, Mecca is often spelled Makkah, while Jeddah is commonly spelled Jiddah or Jaddah. The Geo_Fullname column will reflect each of these different spellings, while the Geo_FeatureID column will resolve them all to the same unique GNS or GNIS feature identification number. For more information on the GNS and GNIS identifiers, see Leetaru (2012).4

When looking for events in or relating to a specific country, such as Syria, there are two possible filtering methods. The first is to use the Actor_CountryCode fields in the Actor section to look for all actors having the SYR (Syria) code. However, conflict zones are often accompanied by high degrees of uncertainty in media reporting and a news article might mention only “Unidentified gunmen stormed a house and shot 12 civilians.” In this case, the Actor_CountryCode fields for Actor1 and Actor2 would both be blank, since the article did not specify the actor country affiliations, while their Geo_CountryCode values (and the ActionGeo_CountryCode for the event) would specify Syria. This can result in dramatic differences when examining active conflict zones. The second method is to examine the ActionGeo_CountryCode for the location of the event. This will also capture situations such as the United States criticizing a statement by Russia regarding a specific Syrian attack.

3 http://www.dlib.org/dlib/september12/leetaru/09leetaru.html
4 http://www.dlib.org/dlib/september12/leetaru/09leetaru.html
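The two country-filtering methods above can be sketched as follows, using plain Python over event records held as dicts keyed by codebook column names. Note the two code systems in play: the Actor*_CountryCode fields carry 3-letter CAMEO codes ("SYR"), while the georeferenced *Geo_CountryCode fields carry 2-character FIPS10-4 codes ("SY"). The helper names are ours, for illustration.

```python
# Sketch: the two filtering strategies for country-level event selection.

def actor_affiliated(event, cameo_code):
    """Method 1: actor affiliation. Blank for 'unidentified gunmen'
    stories, so it undercounts in active conflict zones."""
    return cameo_code in (event.get("Actor1_CountryCode"),
                          event.get("Actor2_CountryCode"))

def event_located_in(event, fips_code):
    """Method 2: event geography. Also catches statements *about* a
    country made elsewhere (e.g. the US criticizing Russia over a
    specific Syrian attack)."""
    return event.get("ActionGeo_CountryCode") == fips_code

events = [
    # Attributed actors, event on Syrian soil.
    {"Actor1_CountryCode": "SYR", "Actor2_CountryCode": "SYR",
     "ActionGeo_CountryCode": "SY"},
    # "Unidentified gunmen" -- actor codes blank, geography still resolved.
    {"Actor1_CountryCode": None, "Actor2_CountryCode": None,
     "ActionGeo_CountryCode": "SY"},
]

by_actor = [e for e in events if actor_affiliated(e, "SYR")]
by_geo = [e for e in events if event_located_in(e, "SY")]
# by_geo catches both records; by_actor misses the unattributed one.
```

This is the "dramatic difference" the codebook describes: in conflict zones, geography-based filtering typically returns substantially more events than actor-based filtering.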
• Actor1Geo_Type. (integer) This field specifies the geographic resolution of the match type and holds one of the following values: 1=COUNTRY (match was at the country level), 2=USSTATE (match was to a US state), 3=USCITY (match was to a US city or landmark), 4=WORLDCITY (match was to a city or landmark outside the US), 5=WORLDSTATE (match was to an Administrative Division 1 outside the US – roughly equivalent to a US state). This can be used to filter events by geographic specificity, for example, extracting only those events with a landmark-level geographic resolution for mapping. Note that matches with codes 1 (COUNTRY), 2 (USSTATE), and 5 (WORLDSTATE) will still provide a latitude/longitude pair, which will be the centroid of that country or state, but the FeatureID field below will be blank.

• Actor1Geo_Fullname. (string) This is the full human-readable name of the matched location. In the case of a country it is simply the country name. For US and World states it is in the format of “State, Country Name”, while for all other matches it is in the format of “City/Landmark, State, Country”. This can be used to label locations when placing events on a map. NOTE: this field reflects the precise name used to refer to the location in the text itself, meaning it may contain multiple spellings of the same location – use the FeatureID column to determine whether two location names refer to the same place.

• Actor1Geo_CountryCode. (string) This is the 2-character FIPS10-4 country code for the location.

• Actor1Geo_ADM1Code. (string) This is the 2-character FIPS10-4 country code followed by the 2-character FIPS10-4 administrative division 1 (ADM1) code for the administrative division housing the landmark. In the case of the United States, this is the 2-character shortform of the state’s name (such as “TX” for Texas).

• Actor1Geo_ADM2Code. (string) For international locations this is the numeric Global Administrative Unit Layers (GAUL) administrative division 2 (ADM2) code assigned to each global location, while for US locations this is the two-character shortform of the state’s name (such as “TX” for Texas) followed by the 3-digit numeric county code (following the INCITS 31:200x standard used in GNIS). For more detail on the contents and computation of this field, please see the following footnoted URL.5 NOTE: This field may be blank/null in cases where no ADM2 information was available, for some ADM1-level matches, and for all country-level matches. NOTE: this field may still contain a value for ADM1-level matches depending on how they are codified in GNS.

• Actor1Geo_Lat. (floating point) This is the centroid latitude of the landmark for mapping.

• Actor1Geo_Long. (floating point) This is the centroid longitude of the landmark for mapping.

• Actor1Geo_FeatureID. (string) This is the GNS or GNIS FeatureID for this location. More information on these values can be found in Leetaru (2012).6 NOTE: When Actor1Geo_Type has a value of 3 or 4 this field will contain a signed numeric value, while it will contain a textual FeatureID in the case of other match resolutions (usually the country code or country code and ADM1 code). A small percentage of small cities and towns may have a blank value in this field even for Actor1Geo_Type values of 3 or 4: this will be corrected in the 2.0 release of GDELT. NOTE: This field can contain both positive and negative numbers, see Leetaru (2012) for more information on this.

These codes are repeated for Actor2 and Action, using those prefixes.

5 http://blog.gdeltproject.org/global-second-order-administrative-divisions-now-available-from-gaul/
6 http://www.dlib.org/dlib/september12/leetaru/09leetaru.html

DATA MANAGEMENT FIELDS
Finally, a set of fields at the end of the record provide additional data management information for the event record.

• DATEADDED. (integer) This field stores the date the event was added to the master database in YYYYMMDDHHMMSS format in the UTC timezone. For those needing to access events at 15 minute resolution, this is the field that should be used in queries.

• SOURCEURL. (string) This field records the URL or citation of the first news report it found this event in. In most cases this is the first report it saw the article in, but due to the timing and flow of news reports through the processing pipeline, this may not always be the very first report, but is at least in the first few reports.

MENTIONS TABLE

The Mentions table is a new addition to GDELT 2.0 and records each mention of the events in the Event table, making it possible to track the trajectory and network structure of a story as it flows through the global media system. Each mention of an event receives its own entry in the Mentions table – thus an event which is mentioned in 100 articles will be listed 100 times in the Mentions table. Mentions are recorded irrespective of the date of the original event, meaning that a mention today of an event from a year ago will still be recorded, making it possible to trace discussion of “anniversary events” or historical events being recontextualized into present actions. If a news report mentions multiple events, each mention is recorded separately in this table. For translated documents, all measures below are based on its English translation.

Several of the new measures recorded in the Mentions table make it possible to better filter events based on how confident GDELT was in its extraction of that event. When trying to understand news media spanning the entire globe, one finds that journalism is rife with ambiguities, assumed background knowledge, and complex linguistic structures. Not every event mention will take the form of “American President Barack Obama met with Russian President Vladimir Putin yesterday at a trade summit in Paris, France.” Instead, an event mention might more commonly appear as “Obama and Putin were in Paris yesterday for a trade summit. The two leaders met backstage where he discussed his policy on Ukraine.” To which of the two leaders do “he” and “his” refer? Is Obama discussing Obama’s policy on Ukraine, or is Obama discussing Putin’s policy on Ukraine, or is it Putin discussing Putin’s policy or perhaps Putin discussing Obama’s policy? While additional cues may be available in the surrounding text, ambiguous event mentions like this are exceptionally common across the world’s media. Similarly, it would be difficult indeed to maintain an exhaustive list of every single political figure in the entire world and thus context is often critical for disambiguating the geographic affiliation of an actor. Even in the case of more senior political leadership, a reference to “Renauld’s press conference this afternoon in Port-au-Prince” most likely refers to Lener Renauld, the Minister of Defense of Haiti, but this disambiguation still carries with it some degree of ambiguity. GDELT makes use of an array of natural language processing algorithms like coreference and deep parsing using whole-of-document context. While these enormously increase GDELT’s ability to understand and extract ambiguous and linguistically complex events, such extractions also come with a higher potential for error. Under GDELT 1.0, the NumMentions field was designed as a composite score of the absolute number of unique documents mentioning an event and the number of revisions to the text required by these various algorithms, up to six revision passes. Under GDELT 2.0, the Mentions table now separates these, with each record in the Mentions table recording an individual mention of an event in an article, while the new Confidence field
records GDELT’s confidence in its extraction of that event from that particular article. This field is a percent, ranging from 10 to 100%, and indicates how aggressively GDELT had to perform tasks like coreference or grammatical restructuring to extract the event from that article. Sorting all mentions of an event by this field makes it possible to identify articles featuring the strongest and most unambiguous discussion of an event.

• GlobalEventID. (integer) This is the ID of the event that was mentioned in the article.

• EventTimeDate. (integer) This is the 15-minute timestamp (YYYYMMDDHHMMSS) when the event being mentioned was first recorded by GDELT (the DATEADDED field of the original event record). This field can be compared against the next one to identify events being mentioned for the first time (their first mentions) or to identify events of a particular vintage being mentioned now (such as filtering for mentions of events at least one week old).

• MentionTimeDate. (integer) This is the 15-minute timestamp (YYYYMMDDHHMMSS) of the current update. This is identical for all entries in the update file but is included to make it easier to load the Mentions table into a database.

• MentionType. (integer) This is a numeric identifier that refers to the source collection the document came from and is used to interpret the MentionIdentifier in the next column. In essence, it specifies how to interpret the MentionIdentifier to locate the actual document. At present, it can hold one of the following values:

o 1 = WEB (The document originates from the open web and the MentionIdentifier is a fully-qualified URL that can be used to access the document on the web.)

o 2 = CITATIONONLY (The document originates from a broadcast, print, or other offline source in which only a textual citation is available for the document. In this case the MentionIdentifier contains the textual citation for the document.)

o 3 = CORE (The document originates from the CORE archive and the MentionIdentifier contains its DOI, suitable for accessing the original document through the CORE website.)

o 4 = DTIC (The document originates from the DTIC archive and the MentionIdentifier contains its DOI, suitable for accessing the original document through the DTIC website.)

o 5 = JSTOR (The document originates from the JSTOR archive and the MentionIdentifier contains its DOI, suitable for accessing the original document through your JSTOR subscription if your institution subscribes to it.)

o 6 = NONTEXTUALSOURCE (The document originates from a textual proxy, such as closed captioning, of a non-textual information source, such as a video, available via a URL, and the MentionIdentifier provides the URL of the non-textual original source. At present, this Collection Identifier is used for processing of the closed captioning streams of the Internet Archive Television News Archive, in which each broadcast is available via a URL, but the URL offers access only to the video of the broadcast and does not provide any access to the textual closed captioning used to generate the metadata. This code is used in order to draw a distinction between URL-based textual material (Collection Identifier 1, WEB) and URL-based non-textual material like the Television News Archive.)

• MentionSourceName. (string) This is a human-friendly identifier of the source of the document. For material originating from the open web with a URL this field will contain the top-level domain the page was from. For BBC Monitoring material it will contain “BBC Monitoring” and for JSTOR material it will contain “JSTOR.” This field is intended for human display of major sources as well as for network analysis of information flows by source, obviating the requirement to perform domain or other parsing of the MentionIdentifier field.
• MentionIdentifier. (string) This is the unique external identifier for the source document. It can be used to uniquely identify the document and access it if you have the necessary subscriptions or authorizations and/or the document is public access. This field can contain a range of values, from URLs of open web resources to textual citations of print or broadcast material to DOI identifiers for various document repositories. For example, if MentionType is equal to 1, this field will contain a fully-qualified URL suitable for direct access. If MentionType is equal to 2, this field will contain a textual citation akin to what would appear in an academic journal article referencing that document (NOTE that the actual citation format will vary (usually between APA, Chicago, Harvard, or MLA) depending on a number of factors and no assumptions should be made on its precise format at this time due to the way in which this data is currently provided to GDELT – future efforts will focus on normalization of this field to a standard citation format). If MentionType is 5, the field will contain a numeric or alpha-numeric DOI that can be typed into JSTOR’s search engine to access the document if your institution has a JSTOR subscription.

• SentenceID. (integer) The sentence within the article where the event was mentioned (starting with the first sentence as 1, the second sentence as 2, the third sentence as 3, and so on). This can be used similarly to the CharOffset fields below, but reports the event’s location in the article in terms of sentences instead of characters, which is more amenable to certain measures of the “importance” of an event’s positioning within an article.

• Actor1CharOffset. (integer) The location within the article (in terms of English characters) where Actor1 was found. This can be used in combination with the GKG or other analysis to identify further characteristics and attributes of the actor. NOTE: due to processing performed on each article, this may be slightly offset from the position seen when the article is rendered in a web browser.

• Actor2CharOffset. (integer) The location within the article (in terms of English characters) where Actor2 was found. This can be used in combination with the GKG or other analysis to identify further characteristics and attributes of the actor. NOTE: due to processing performed on each article, this may be slightly offset from the position seen when the article is rendered in a web browser.

• ActionCharOffset. (integer) The location within the article (in terms of English characters) where the core Action description was found. This can be used in combination with the GKG or other analysis to identify further characteristics and attributes of the event. NOTE: due to processing performed on each article, this may be slightly offset from the position seen when the article is rendered in a web browser.

• InRawText. (integer) This records whether the event was found in the original unaltered raw article text (a value of 1) or whether advanced natural language processing algorithms were required to synthesize and rewrite the article text to identify the event (a value of 0). See the discussion on the Confidence field below for more details. Mentions with a value of “1” in this field likely represent strong detail-rich references to an event.

• Confidence. (integer) Percent confidence in the extraction of this event from this article. See the discussion above.

• MentionDocLen. (integer) The length in English characters of the source document (making it possible to filter for short articles focusing on a particular event versus long summary articles that casually mention an event in passing).

• MentionDocTone. (numeric) The same contents as the AvgTone field in the Events table, but computed for this particular article. NOTE: users interested in emotional measures should use
the MentionIdentifier field above to merge the Mentions table with the GKG table to access the complete set of 2,300 emotions and themes from the GCAM system.

• MentionDocTranslationInfo. (string) This field is internally delimited by semicolons and is used to record provenance information for machine translated documents, indicating the original source language and the citation of the translation system used to translate the document for processing. It will be blank for documents originally in English. At this time the field will also be blank for documents translated by a human translator and provided to GDELT in English (such as BBC Monitoring materials) – in future this field may be expanded to include information on human translation pipelines, but at present it only captures information on machine translated materials. An example of the contents of this field might be “srclc:fra; eng:Moses 2.1.1 / MosesCore Europarl fr-en / GT-FRA 1.0”. NOTE: Machine translation is often not as accurate as human translation and users requiring the highest possible confidence levels may wish to exclude events whose only mentions are in translated reports, while those needing the highest-possible coverage of the non-Western world will find that these events often offer the earliest glimmers of breaking events or smaller-bore events of less interest to Western media.

o SRCLC. This is the Source Language Code, representing the three-letter ISO639-2 code of the language of the original source material.

o ENG. This is a textual citation string that indicates the engine(s) and model(s) used to translate the text. The format of this field will vary across engines and over time and no expectations should be made on the ordering or formatting of this field. In the example above, the string “Moses 2.1.1 / MosesCore Europarl fr-en / GT-FRA 1.0” indicates that the document was translated using version 2.1.1 of the Moses7 SMT platform, using the “MosesCore Europarl fr-en” translation and language models, with the final translation enhanced via GDELT Translingual’s own version 1.0 French translation and language models. A value of “GT-ARA 1.0” indicates that GDELT Translingual’s version 1.0 Arabic translation and language models were the sole resources used for translation. Additional language systems used in the translation pipeline such as word segmentation systems are also captured in this field such that a value of “GT-ZHO 1.0 / Stanford PKU” indicates that the Stanford Chinese Word Segmenter8 was used to segment the text into individual words and sentences, which were then translated by GDELT Translingual’s own version 1.0 Chinese (Traditional or Simplified) translation and language models.

• Extras. (string) This field is currently blank, but is reserved for future use to encode special additional measurements for selected material.

7 http://www.statmt.org/moses/
8 http://nlp.stanford.edu/software/segmenter.shtml
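Since MentionDocTranslationInfo is semicolon-delimited with `key:value` entries (SRCLC and ENG), it can be unpacked with a few lines of Python. A minimal sketch; the helper name is ours, and an empty result is taken to signal a document originally in English (or human-translated material, for which the field is also blank at present):

```python
# Sketch: unpacking the semicolon-delimited MentionDocTranslationInfo
# field into its named parts. Splitting on the FIRST colon of each entry
# matters, because the ENG value can itself contain slashes and spaces.

def parse_translation_info(field):
    info = {}
    if not field:
        return info  # blank field: no machine-translation provenance
    for part in field.split(";"):
        key, _, value = part.strip().partition(":")
        if key:
            info[key.strip().lower()] = value.strip()
    return info

example = "srclc:fra; eng:Moses 2.1.1 / MosesCore Europarl fr-en / GT-FRA 1.0"
parsed = parse_translation_info(example)
# parsed["srclc"] gives the ISO639-2 source language ("fra");
# parsed["eng"] gives the translation-pipeline citation string.
```

Users who need the highest-confidence subset can combine this with the Confidence and InRawText fields, e.g. keeping only mentions where the parsed dict is empty (no machine translation) and Confidence is high.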