---
task_categories:
- text-classification
language:
- pt
tags:
- FakeRecogna
- Fake News
- Portuguese
- Dataset
license: mit
size_categories:
- 10K<n<100K
---
# FakeRecogna 2.0 Extractive

FakeRecogna 2.0 is an extension of the FakeRecogna dataset for fake news detection. FakeRecogna includes real and fake news texts collected from online media and ten fact-checking sources in Brazil. An important aspect is that the real and fake news samples are not related to each other, which avoids introducing intrinsic bias into the data.

## The Dataset

The fake news collection was performed on licensed and verified Brazilian fact-checking websites enrolled in the [Duke Reporters' Lab](https://reporterslab.org/fact-checking/), an initiative created to help fight the spread of fake news worldwide. For real news, we selected well-known media platforms in Brazil. Since real articles are much longer than most of the produced fake content, the genuine news was preprocessed with text summarization. At this stage, no stop-word removal or lemmatization is applied. After trimming and standardizing the real news, we produced textual representations based on Bag of Words (BoW), Term Frequency-Inverse Document Frequency (TF-IDF), FastText, PTT5, and BERTimbau to form the input feature vectors for the machine learning models. The figure below illustrates the steps of the proposed method; a minimal sketch of the representation step follows the figure.


<!--- Pipeline figure -->
<p align="center">
  <img src="https://huggingface.co/datasets/recogna-nlp/FakeRecogna2/resolve/main/pipeline_proposed_method.jpg" alt="Pipeline of the FakeRecogna 2.0 proposed method" width="600"/>
</p>
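
As a rough illustration of the representation step described above, the sketch below builds TF-IDF feature vectors from the news texts with scikit-learn and trains a simple classifier. It assumes the data is already loaded into a pandas DataFrame `df` with `News` and `Label` columns named as in Table 2; the vectorizer settings and the classifier are illustrative choices, not the exact configuration used in the original experiments.

```python
# Minimal TF-IDF baseline sketch (assumes a DataFrame `df` with "News" and "Label"
# columns, as listed in Table 2; settings below are illustrative, not the paper's).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def tfidf_baseline(df):
    # Split the (summarized and standardized) texts and their real/fake labels.
    X_train, X_test, y_train, y_test = train_test_split(
        df["News"], df["Label"], test_size=0.2, random_state=42, stratify=df["Label"]
    )

    # Sparse TF-IDF feature vectors over word unigrams and bigrams.
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=50_000)
    X_train_vec = vectorizer.fit_transform(X_train)
    X_test_vec = vectorizer.transform(X_test)

    # Any classic ML model can consume these vectors; logistic regression is one option.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train_vec, y_train)
    return clf.score(X_test_vec, y_test)
```

The dense representations mentioned above (FastText, PTT5, BERTimbau) would replace the vectorizer with the corresponding embedding model while keeping the rest of the pipeline unchanged.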


Fake news samples were collected from nine fact-checking agencies in Brazil, which provides a broad range of categories and a large number of fake news samples, thus promoting data diversity. Table 1 presents these Brazilian fact-checking initiatives and the number of fake news samples collected from each source. When the search process was concluded, we had 26,569 fake news samples, which were then processed to detect and remove possible duplicates, leading to a final set of 26,400 fake news articles (a minimal deduplication sketch is given after Table 1).

| Fact-Check Agency  | Web address                                             | # News |
| ------------------ | ------------------------------------------------------- | ------ |
| AFP Checamos       | https://checamos.afp.com/afp-brasil                     | 1,587  |
| Agência Lupa       | https://piaui.folha.uol.com.br/lupa                     | 3,147  |
| Aos Fatos          | https://aosfatos.org                                    | 2,720  |
| Boatos.org         | https://boatos.org                                      | 8,654  |
| Estadão Verifica   | https://politica.estadao.com.br/blogs/estadao-verifica  | 1,405  |
| E-farsas           | https://www.e-farsas.com                                | 3,330  |
| Fato ou Fake       | https://oglobo.globo.com/fato-ou-fake                   | 2,270  |
| Projeto Comprova   | https://projetocomprova.com.br                          | 887    |
| UOL Confere        | https://noticias.uol.com.br/confere                     | 2,579  |
| **Total**          |                                                         | 26,569 |
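
For reference, the duplicate-removal step can be reproduced along the following lines; the exact matching criterion used to go from 26,569 to 26,400 samples is not documented in this card, so comparing the normalized `News` text is an assumption.

```python
import pandas as pd


def drop_duplicate_news(df: pd.DataFrame) -> pd.DataFrame:
    # Lightly normalize the text before comparison (lowercase, collapse whitespace);
    # the real pipeline may use a different matching criterion.
    normalized = df["News"].astype(str).str.lower().str.split().str.join(" ")
    # Keep only the first occurrence of each distinct news text.
    return df.loc[~normalized.duplicated(keep="first")].reset_index(drop=True)
```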

## More information

The FakeRecogna 2 dataset is a single XLSX file with 8 metadata columns, where each row represents one sample (a real or fake news article), as described in Table 2. A minimal loading sketch is given after the table.

| Column                   | Description                                    |
| ------------------------ | ---------------------------------------------- |
| Title                    | Title of the article                           |
| Sub-title (if available) | Brief description of the news                  |
| News                     | Text content of the article                    |
| Category                 | Category in which the news article is grouped  |
| Author                   | Publication author                             |
| Date                     | Publication date                               |
| URL                      | Article web address                            |
| Label                    | 0 for real news, 1 for fake news               |
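
A minimal loading sketch, assuming the spreadsheet sits at the root of this repository (the filename below is a guess; check the repository's file list):

```python
# Download the spreadsheet from the Hub and read it with pandas.
# Requires: pip install huggingface_hub pandas openpyxl
from huggingface_hub import hf_hub_download
import pandas as pd

path = hf_hub_download(
    repo_id="recogna-nlp/FakeRecogna2",
    filename="FakeRecogna2.xlsx",  # assumed filename; may differ in the repository
    repo_type="dataset",
)

df = pd.read_excel(path)             # one row per (real or fake) news sample
print(df.shape)                      # 8 metadata columns, as in Table 2
print(df["Label"].value_counts())    # 0 = real news, 1 = fake news (column names may differ)
```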

### FakeRecogna v2 - Abstractive

The abstractive summarization version of FakeRecogna 2 can be found [here](https://huggingface.co/datasets/recogna-nlp/fakerecogna2-abstrativo).

## Citation


    @inproceedings{garcia-etal-2024-text,
    title = "Text Summarization and Temporal Learning Models Applied to {P}ortuguese Fake News Detection in a Novel {B}razilian Corpus Dataset",
    author = "Garcia, Gabriel Lino  and
      Paiola, Pedro Henrique  and
      Jodas, Danilo Samuel  and
      Sugi, Luis Afonso  and
      Papa, Jo{\~a}o Paulo",
    editor = "Gamallo, Pablo  and
      Claro, Daniela  and
      Teixeira, Ant{\'o}nio  and
      Real, Livy  and
      Garcia, Marcos  and
      Oliveira, Hugo Gon{\c{c}}alo  and
      Amaro, Raquel",
    booktitle = "Proceedings of the 16th International Conference on Computational Processing of Portuguese - Vol. 1",
    month = mar,
    year = "2024",
    address = "Santiago de Compostela, Galicia/Spain",
    publisher = "Association for Computational Lingustics",
    url = "https://aclanthology.org/2024.propor-1.9/",
    pages = "86--96"
    }