---
license: cc-by-4.0
---
# Dataset Card for dwb2023/gdelt-mentions-2025-v4

This dataset contains the mention records from the GDELT (Global Database of Events, Language, and Tone) Project, tracking how global events are mentioned across media sources over time.
## Dataset Details

### Dataset Description

The GDELT Mentions table is a component of the GDELT Event Database that tracks each mention of an event across all monitored news sources. Unlike the Event table, which records unique events, the Mentions table records every time an event is referenced in media, allowing researchers to track the trajectory and media lifecycle of stories as they flow through the global information ecosystem.

- **Curated by:** The GDELT Project
- **Funded by:** Google Ideas, supported by Google Cloud Platform
- **Language(s) (NLP):** Multi-language source data, processed into a standardized English format
- **License:** CC-BY-4.0; all GDELT data is available for free download and use with proper attribution
- **Updates:** Every 15 minutes, 24/7

### Dataset Sources

- **Repository:** http://gdeltproject.org/
- **Documentation:** http://data.gdeltproject.org/documentation/GDELT-Event_Codebook-V2.0.pdf

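Because a new mentions file is published every 15 minutes, downloads are keyed by a 15-minute timestamp. The sketch below builds a download URL for a given time, assuming the public `gdeltv2` directory layout (`YYYYMMDDHHMMSS.mentions.CSV.zip`); verify the exact naming against the feed's `lastupdate.txt` before relying on it.

```python
from datetime import datetime, timezone

# Assumed layout of the public GDELT v2 directory; confirm against
# http://data.gdeltproject.org/gdeltv2/lastupdate.txt before use.
BASE_URL = "http://data.gdeltproject.org/gdeltv2"

def mentions_url(ts: datetime) -> str:
    """Build the download URL for the 15-minute mentions file covering ts."""
    # Round down to the containing 15-minute boundary (GDELT timestamps are UTC).
    floored = ts.replace(minute=ts.minute - ts.minute % 15, second=0, microsecond=0)
    return f"{BASE_URL}/{floored:%Y%m%d%H%M%S}.mentions.CSV.zip"

print(mentions_url(datetime(2025, 3, 1, 12, 7, tzinfo=timezone.utc)))
# http://data.gdeltproject.org/gdeltv2/20250301120000.mentions.CSV.zip
```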
## Uses

### Direct Use

- Tracking media coverage patterns for specific events
- Analyzing information diffusion across global media
- Measuring event importance through mention frequency
- Studying reporting biases across different media sources
- Assessing the confidence of event reporting
- Analyzing narrative framing through tonal differences
- Tracking historical event references and anniversary coverage

### Out-of-Scope Use

- Exact source text extraction (only character offsets are provided)
- Definitive audience reach measurement (mentions don't equate to readership)
- Direct access to all mentioned source documents (URLs are provided but access may be limited)
- Language analysis of original non-English content (translation information is provided, but the original text is not included)

## Dataset Structure

The dataset consists of tab-delimited files with 16 fields per mention record:

1. Event Reference Information
   - GlobalEventID: Links to the event being mentioned
   - EventTimeDate: Timestamp when the event was first recorded (YYYYMMDDHHMMSS)
   - MentionTimeDate: Timestamp of the mention (YYYYMMDDHHMMSS)

2. Source Information
   - MentionType: Numeric identifier for the source collection (1=Web, 2=Citation, etc.)
   - MentionSourceName: Human-friendly identifier (domain name, "BBC Monitoring", etc.)
   - MentionIdentifier: Unique external identifier (URL, DOI, citation)

3. Mention Context Details
   - SentenceID: Sentence number within the article where the event was mentioned
   - Actor1CharOffset: Character position where Actor1 was found in the text
   - Actor2CharOffset: Character position where Actor2 was found in the text
   - ActionCharOffset: Character position where the core Action was found
   - InRawText: Whether the event was found in the original text (1) or required processing (0)
   - Confidence: Percent confidence in the extraction (10-100%)
   - MentionDocLen: Length of the source document in characters
   - MentionDocTone: Average tone of the document (-100 to +100)
   - MentionDocTranslationInfo: Information about translation (semicolon-delimited)
   - Extras: Reserved for future use

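Since the files are headerless and tab-delimited, the 16 column names above have to be supplied when loading. A minimal sketch with pandas (the sample row is hypothetical, for illustration only):

```python
import io
import pandas as pd

# The 16 mentions fields in file order, as listed above.
MENTIONS_COLUMNS = [
    "GlobalEventID", "EventTimeDate", "MentionTimeDate",
    "MentionType", "MentionSourceName", "MentionIdentifier",
    "SentenceID", "Actor1CharOffset", "Actor2CharOffset", "ActionCharOffset",
    "InRawText", "Confidence", "MentionDocLen", "MentionDocTone",
    "MentionDocTranslationInfo", "Extras",
]

def load_mentions(source) -> pd.DataFrame:
    """Load a headerless, tab-delimited GDELT mentions file."""
    df = pd.read_csv(source, sep="\t", names=MENTIONS_COLUMNS, dtype=str)
    # Convert the two YYYYMMDDHHMMSS timestamps into proper datetimes.
    for col in ("EventTimeDate", "MentionTimeDate"):
        df[col] = pd.to_datetime(df[col], format="%Y%m%d%H%M%S")
    df["Confidence"] = df["Confidence"].astype(int)
    df["MentionDocTone"] = df["MentionDocTone"].astype(float)
    return df

# One hypothetical record, for illustration only.
row = "\t".join([
    "1234567", "20250301120000", "20250301121500", "1",
    "example.com", "https://example.com/a", "2",
    "120", "256", "180", "1", "100", "3500", "-2.5", "", "",
])
df = load_mentions(io.StringIO(row))
print(df.loc[0, "Confidence"], df.loc[0, "MentionTimeDate"])
```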
## Dataset Creation

### Curation Rationale

The GDELT Mentions table was created to track the lifecycle of news stories and provide a deeper understanding of how events propagate through the global media ecosystem. It enables analysis of the importance of events based on coverage patterns and allows researchers to trace narrative evolution across different sources and time periods.

### Curation Method

- Prefect-based Python extraction script: https://gist.github.com/donbr/704789a6131bb4a92c9810185c63a16a

### Source Data

#### Data Collection and Processing

- Every mention of an event is tracked across all monitored sources
- Each mention is recorded regardless of when the original event occurred
- Translation information is preserved for non-English sources
- Confidence scores indicate the level of natural language processing required
- Character offsets are provided to locate mentions within articles

#### Who are the source data producers?

Primary sources include:
- International news media
- Web news
- Broadcast transcripts
- Print media
- Academic repositories (with DOIs)
- Various online platforms

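For the translation information mentioned above, the MentionDocTranslationInfo field holds semicolon-delimited key:value pairs (e.g. a source-language code and the translation engine used); an empty field indicates the document was originally in English. A small parser, assuming that key:value layout (the exact keys and engine strings should be checked against the codebook):

```python
def parse_translation_info(raw: str) -> dict:
    """Parse the semicolon-delimited MentionDocTranslationInfo field.

    Assumes key:value pairs such as 'srclc:fra;eng:Moses 2.1.1'
    (srclc = source language code, eng = translation engine); an empty
    field means the document was originally in English.
    """
    if not raw:
        return {}
    pairs = (part.split(":", 1) for part in raw.split(";") if ":" in part)
    return {key.strip(): value.strip() for key, value in pairs}

print(parse_translation_info("srclc:fra;eng:Moses 2.1.1"))
# {'srclc': 'fra', 'eng': 'Moses 2.1.1'}
```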
### Personal and Sensitive Information

Similar to the Events table, this dataset focuses on public events and may contain:
- URLs to news articles mentioning public figures and events
- Information about how events were framed by different media outlets
- Translation metadata for non-English sources
- Document tone measurements

## Bias, Risks, and Limitations

1. Media Coverage Biases
   - Over-representation of widely covered events
   - Variance in coverage across different regions and languages
   - Digital divide affecting representation of less-connected regions

2. Technical Limitations
   - Varying confidence levels in event extraction
   - Translation quality differences across languages
   - Character offsets may not perfectly align with rendered web content
   - Not all MentionIdentifiers (URLs) remain accessible over time

3. Coverage Considerations
   - Higher representation of English and major world languages
   - Potential duplication when similar articles appear across multiple outlets
   - Varying confidence scores based on linguistic complexity

### Recommendations

1. Users should:
   - Consider confidence scores when analyzing mentions
   - Account for translation effects when studying non-English sources
   - Use MentionDocLen to distinguish between focused coverage and passing references
   - Recognize that URL accessibility may diminish over time
   - Consider SentenceID to assess the prominence of an event mention within an article

2. Best Practices:
   - Filter by a Confidence level appropriate to research needs
   - Use the InRawText field to identify direct versus synthesized mentions
   - Analyze MentionDocTone in context with the overall event
   - Account for temporal patterns in media coverage
   - Cross-reference with the Events table for comprehensive analysis

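The Confidence and InRawText filters above can be sketched as a simple pandas query; the threshold of 50 and the tiny inline slice are illustrative choices, not part of the dataset:

```python
import io
import pandas as pd

# A tiny, hypothetical mentions slice with just the fields used below.
raw = io.StringIO(
    "GlobalEventID\tInRawText\tConfidence\tMentionDocLen\tMentionDocTone\n"
    "1\t1\t100\t4500\t-3.2\n"
    "1\t0\t20\t900\t1.0\n"
    "2\t1\t70\t12000\t-0.5\n"
)
df = pd.read_csv(raw, sep="\t")

# Keep high-confidence, directly observed mentions (InRawText == 1).
filtered = df[(df["Confidence"] >= 50) & (df["InRawText"] == 1)]
print(filtered["GlobalEventID"].tolist())  # [1, 2]
```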
## Citation

**BibTeX:**
```bibtex
@inproceedings{leetaru2013gdelt,
  title={GDELT: Global Data on Events, Language, and Tone, 1979-2012},
  author={Leetaru, Kalev and Schrodt, Philip},
  booktitle={International Studies Association Annual Conference},
  year={2013},
  address={San Francisco, CA}
}
```

**APA:**
Leetaru, K., & Schrodt, P. (2013). GDELT: Global Data on Events, Language, and Tone, 1979-2012. Paper presented at the International Studies Association Annual Conference, San Francisco, CA.

## Dataset Card Contact

dwb2023