ReySajju742 committed on
Commit dfac3d3 · verified · 1 Parent(s): 34ea086

Upload 24 files

.gitignore ADDED
@@ -0,0 +1,2 @@
+ .idea
+ .DS_Store
LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2019 Muhammad Irfan
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
README.md CHANGED
@@ -1,12 +1,233 @@
- ---
- license: apache-2.0
- task_categories:
- - token-classification
- language:
- - ur
- tags:
- - art
- pretty_name: urdu-speech-tagging
- size_categories:
- - 10K<n<100K
- ---
+ ## Summary Dataset
+ This is a summarization dataset. You can train an abstractive summarization model with it. It contains 3 files,
+ `train`, `test`, and `val`, in `jsonl` format.
+
+ Each line has these keys:
+ ```text
+ id
+ url
+ title
+ summary
+ text
+ ```
+
+ You can easily read the data with pandas:
+ ```python
+ import pandas as pd
+ test = pd.read_json("summary/urdu_test.jsonl", lines=True)
+ ```
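Since each line is a standalone JSON object, you can also inspect records with the standard library alone. This is a minimal sketch; the record values below are made up for illustration, only the key names come from the dataset description:

```python
import json

# a synthetic record with the same keys as the dataset (values are invented)
line = '{"id": 1, "url": "https://example.com", "title": "t", "summary": "s", "text": "x"}'
record = json.loads(line)

# every record in the jsonl files should expose these keys
expected = {"id", "url", "title", "summary", "text"}
assert expected <= set(record.keys())
```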
+
+ ## POS dataset
+ An Urdu dataset for POS training. This is a small dataset and can be used to train a part-of-speech tagger for the Urdu language.
+ The structure of the dataset is simple:
+ ```text
+ word TAG
+ word TAG
+ ```
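A minimal reader for this layout could look like the following sketch. It assumes sentences are separated by blank lines (an assumption, not confirmed by the dataset itself), and the sample tokens are invented:

```python
def read_pos_file(lines):
    """Parse word-TAG lines into a list of sentences of (word, tag) pairs.

    Assumes one token per line, whitespace-separated, with blank lines
    separating sentences (check this against the actual files).
    """
    sentences, current = [], []
    for line in lines:
        line = line.strip()
        if not line:                      # blank line ends a sentence
            if current:
                sentences.append(current)
                current = []
            continue
        word, tag = line.rsplit(maxsplit=1)
        current.append((word, tag))
    if current:
        sentences.append(current)
    return sentences

# made-up sample in the same "word TAG" layout
sample = ["kitab NN", "parho VB", "", "acha JJ"]
print(read_pos_file(sample))
# → [[('kitab', 'NN'), ('parho', 'VB')], [('acha', 'JJ')]]
```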
27
+
28
+ The tagset used to build dataset is taken from [Sajjad's Tagset](http://www.cle.org.pk/Downloads/langproc/UrduPOStagger/UrduPOStagset.pdf)
29
+
30
+ ## NER Datasets
31
+ Following are the datasets used for NER tasks.
32
+
33
+ ### UNER Dataset
34
+ Happy to announce that UNER (Urdu Named Entity Recognition) dataset is available for NLP apps.
35
+ Following are NER tags which are used to build the dataset:
36
+ ```text
37
+ PERSON
38
+ LOCATION
39
+ ORGANIZATION
40
+ DATE
41
+ NUMBER
42
+ DESIGNATION
43
+ TIME
44
+ ```
45
+ If you want to read more about the dataset check this paper [Urdu NER](https://www.researchgate.net/profile/Ali_Daud2/publication/312218764_Named_Entity_Dataset_for_Urdu_Named_Entity_Recognition_Task/links/5877354d08ae8fce492efe1f.pdf).
46
+ NER Dataset is in `utf-16` format.
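Because the file is `utf-16`, pass the encoding explicitly when opening it; reading it as the default `utf-8` will fail or produce garbage. A small round-trip sketch (the sample content here is invented, only the encoding comes from the dataset description):

```python
import os
import tempfile

# write a tiny utf-16 sample to demonstrate round-tripping the encoding
sample = "نواز\tPERSON\nلاہور\tLOCATION\n"
path = os.path.join(tempfile.mkdtemp(), "uner_sample.txt")
with open(path, "w", encoding="utf-16") as f:
    f.write(sample)

# always pass encoding="utf-16" when reading the UNER file
with open(path, encoding="utf-16") as f:
    text = f.read()
assert text == sample
```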
+
+ ### MK-PUCIT Dataset
+ The latest Urdu NER corpus is available. Check this paper for more information: [MK-PUCIT](https://www.researchgate.net/publication/332653135_URDU_NAMED_ENTITY_RECOGNITION_CORPUS_GENERATION_AND_DEEP_LEARNING_APPLICATIONS).
+
+ The entities used in the dataset are:
+ ```text
+ Other
+ Organization
+ Person
+ Location
+ ```
+
+ The `MK-PUCIT` authors also provide a `Dropbox` link to download the data: [Dropbox](https://www.dropbox.com/sh/1ivw7ykm2tugg94/AAB9t5wnN7FynESpo7TjJW8la)
+
+ ### IJNLP 2008 dataset
+ The IJNLP dataset has the following NER tags:
+ ```text
+ O
+ LOCATION
+ PERSON
+ TIME
+ ORGANIZATION
+ NUMBER
+ DESIGNATION
+ ```
+
+ ### Jahangir dataset
+ The Jahangir dataset has the following NER tags:
+ ```text
+ O
+ PERSON
+ LOCATION
+ ORGANIZATION
+ DATE
+ TIME
+ ```
+
+ ## Datasets for Sentiment Analysis
+ ### IMDB Urdu Movie Review Dataset
+ This dataset is taken from [IMDB Urdu](https://www.kaggle.com/akkefa/imdb-dataset-of-50k-movie-translated-urdu-reviews).
+ It was translated with Google Translate. It has only two labels:
+ ```text
+ positive
+ negative
+ ```
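The sentiment archives ship as `.csv.tar.gz`, so one way to load them is to open the tarball with the standard library and hand the extracted member to pandas. This sketch builds a tiny synthetic archive to show the pattern; the inner file name and columns are invented, so check them against the real archive:

```python
import io
import tarfile

import pandas as pd

# build a tiny synthetic archive standing in for imdb_urdu_reviews.csv.tar.gz
csv_bytes = b"review,sentiment\ngood movie,positive\nbad movie,negative\n"
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    info = tarfile.TarInfo(name="imdb_urdu_reviews.csv")
    info.size = len(csv_bytes)
    tar.addfile(info, io.BytesIO(csv_bytes))
buf.seek(0)

# extract the first member and read it with pandas
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    member = tar.getmembers()[0]
    df = pd.read_csv(tar.extractfile(member))

print(df.sentiment.tolist())  # → ['positive', 'negative']
```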
+
+ ### Roman Dataset
+ This dataset can be used for sentiment analysis of Roman Urdu. It has 3 classes:
+ ```text
+ Neutral
+ Positive
+ Negative
+ ```
+ If you need more information about this dataset, check out [Roman Urdu Dataset](https://archive.ics.uci.edu/ml/datasets/Roman+Urdu+Data+Set).
+
+ ### Products & Services dataset
+ This dataset was collected from sources such as social media and the web, covering various products and services, for sentiment analysis.
+ It contains 3 classes:
+ ```text
+ pos
+ neg
+ neu
+ ```
+
+ ### Daraz Products dataset
+ This dataset consists of reviews taken from Daraz. You can use it for sentiment analysis as well as spam/ham classification.
+ It contains the following columns:
+ ```text
+ Product_ID
+ Date
+ Rating
+ Spam(1) and Not Spam(0)
+ Reviews
+ Sentiment
+ Features
+ ```
+ The dataset is taken from [kaggle daraz](https://www.kaggle.com/datasets/naveedhn/daraz-roman-urdu-reviews).
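For the spam/ham use case, a first step is simply filtering on the spam column. The sketch below uses synthetic rows; `Spam` stands in for the unwieldy `Spam(1) and Not Spam(0)` header (verify the real header in the CSV before relying on it):

```python
import pandas as pd

# synthetic rows using a subset of the columns listed above;
# "Spam" is a stand-in name for the "Spam(1) and Not Spam(0)" column
df = pd.DataFrame({
    "Product_ID": [101, 102, 103],
    "Rating": [5, 1, 4],
    "Spam": [0, 1, 0],
    "Reviews": ["acha product", "click this link", "theek hai"],
    "Sentiment": ["pos", "neu", "pos"],
})

# keep only non-spam reviews before running sentiment analysis
ham = df[df["Spam"] == 0]
print(len(ham))  # → 2
```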
+
+ ### Urdu Dataset
+ Here is a small dataset for sentiment analysis. It has the following labels:
+ ```text
+ P
+ N
+ O
+ ```
+ Link to the paper: [Paper](https://www.researchgate.net/publication/338396518_Urdu_Sentiment_Corpus_v10_Linguistic_Exploration_and_Visualization_of_Labeled_Dataset_for_Urdu_Sentiment_Analysis)
+ GitHub link to the data: [Urdu Corpus V1](https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus)
+
+ ## News Datasets
+ ### Urdu News Dataset 1M
+ This dataset (`news/urdu-news-dataset-1M.tar.xz`) is taken from [Urdu News Dataset 1M](https://data.mendeley.com/datasets/834vsxnb99/3). It has 4 classes and can be used for classification
+ and other NLP tasks. I have removed unnecessary columns.
+ ```text
+ Business & Economics
+ Entertainment
+ Science & Technology
+ Sports
+ ```
+
+ ### Real-Fake News
+ This dataset (`news/real_fake_news.tar.gz`) is used for classification of real and fake news in [Fake News Dataset](https://github.com/MaazAmjad/Datasets-for-Urdu-news).
+ The dataset contains news from the following domains:
+ ```text
+ Technology
+ Education
+ Business
+ Sports
+ Politics
+ Entertainment
+ ```
+
+ ### News Headlines
+ The headlines dataset (`news/headlines.csv.tar.gz`) is taken from [Urdu News Headlines](https://github.com/mwaseemrandhawa/Urdu-News-Headline-Dataset). The original dataset is in Excel format;
+ I've converted it to CSV for experiments. It can be used for clustering and classification.
+
+ ## COUNTER (COrpus of Urdu News TExt Reuse) Dataset
+ This dataset was collected from journalistic sources and can be used for Urdu NLP research.
+ Here is the link to the resource for more information:
+ [COUNTER](http://ucrel.lancs.ac.uk/textreuse/counter.php).
+
+ ## QA datasets
+ I have added two QA datasets for anyone who wants to build a QA-based chatbot.
+
+ `qa_ahadis.csv` contains QA pairs for Ahadis, and `qa_gk.csv` contains general-knowledge QA pairs.
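A retrieval-style chatbot over these files can start from plain (question, answer) pairs. The column names `question` and `answer` below are hypothetical (the actual CSV headers are not documented here), and the frame is synthetic:

```python
import pandas as pd

# hypothetical column names; inspect qa_ahadis.csv / qa_gk.csv for the real ones
qa = pd.DataFrame({
    "question": ["پاکستان کا دارالحکومت کیا ہے؟"],
    "answer": ["اسلام آباد"],
})

# materialize (question, answer) pairs for a simple lookup chatbot
pairs = list(zip(qa["question"], qa["answer"]))
assert len(pairs) == 1
```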
+ ## Urdu model for SpaCy
+ An Urdu model for SpaCy is now available. You can use it to build NLP apps easily. Install the package in your working environment:
+ ```shell
+ pip install ur_model-0.0.0.tar.gz
+ ```
+
+ You can use it with the following code:
+ ```python
+ import spacy
+ nlp = spacy.load("ur_model")
+ doc = nlp("میں خوش ہوں کے اردو ماڈل دستیاب ہے۔ ")
+ ```
+
+ ### NLP Tutorials for Urdu
+ Check out my articles on Urdu NLP tasks:
+ * POS Tagging: [Urdu POS Tagging using MLP](https://www.urdunlp.com/2019/04/urdu-pos-tagging-using-mlp.html)
+ * NER: [How to build NER dataset for Urdu language?](https://www.urdunlp.com/2019/08/how-to-build-ner-dataset-for-urdu.html), [Named Entity Recognition for Urdu](https://www.urdunlp.com/2019/05/named-entity-recognition-for-urdu.html)
+ * Word2Vec: [How to build Word 2 Vector for Urdu language](https://www.urdunlp.com/2019/08/how-to-build-word-2-vector-for-urdu.html)
+ * Word and Sentence Similarity: [Urdu Word and Sentence Similarity using SpaCy](https://www.urdunlp.com/2019/08/urdu-word-and-sentence-similarity-using.html)
+ * Tokenization: [Urdu Tokenization using SpaCy](https://www.urdunlp.com/2019/05/urdu-tokenization-usingspacy.html)
+ * Urdu Language Model: [How to build Urdu language model in SpaCy](https://www.urdunlp.com/2019/08/how-to-build-urdu-language-model-in.html)
+
+ These articles are available on [UrduNLP](https://www.urdunlp.com/).
+
+ ## Some Helpful Tips
+
+ ### Download a single file from GitHub
+ If you want only the raw file (text or code), use `curl` with the raw URL; the `blob` page URL returns HTML, not the file itself:
+ ```shell script
+ curl -LJO https://raw.githubusercontent.com/mirfan899/Urdu/master/ner/uner.txt
+ ```
+
+ ### Concatenate files
+ ```shell script
+ cd data
+ cat */*.txt > file_name.txt
+ ```
+
+ ### MK-PUCIT
+ Concatenate the MK-PUCIT files into a single file using:
+ ```shell script
+ cat */*.txt > file_name.txt
+ ```
+
+ The original dataset has an inconsistency: `Others` and `Other` refer to the same entity. If you use the dataset
+ from the `Dropbox` link, clean it with the following commands:
+ ```python
+ import pandas as pd
+
+ # use a list (not a set) for names so the column order is deterministic;
+ # the order word, tag is assumed here -- verify it against the raw file
+ data = pd.read_csv('ner/mk-pucit.txt', sep='\t', names=["word", "tag"])
+ data.tag.replace({"Others": "Other"}, inplace=True)
+ # save as csv or txt as needed by changing the extension
+ data.to_csv("ner/mk-pucit.txt", index=False, header=False, sep='\t')
+ ```
+ Now the csv/txt file has the format:
+ ```text
+ word tag
+ ```
+
+ ## Note
+ If you have a dataset (or a link to one) and want to contribute, feel free to create a PR.
_config.yml ADDED
@@ -0,0 +1 @@
+ theme: jekyll-theme-minimal
counter/counter.txt.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:53a46cadebc355b80ace027e73084e80c681c4dc502776686cad7417073e4246
+ size 583151
ner/ijnlp.tar.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:63c7c3e5d525b08d4f24bf2434e5148984b500801a641998e7663a304a666fbb
+ size 90228
ner/jahangir.tar.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef05ffaf104518fa49bf691ccc18d66aa9f4775c0514a779d68575959ac211ba
+ size 92284
ner/mk-pucit.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6f801ab6d0572cf63c1c8927faca52dd745d9da7b839b8f5a8bcfac3d3a7e62c
+ size 23172345
ner/uner.txt.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bf50bf3baa469eab744768cfe5bdcd8fbb05c9532a35c2adce4d17c8e3825153
+ size 108496
news/headlines.csv.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c8b29ca3c8209047618a9a05dce9b37dfcb50a6fd0fe62753aa5c9ed4ead7431
+ size 10203570
news/real_fake_news.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:99deb9f56b5366efb6013285417ae5ff1fc4791d682c51216658fea00b9f161f
+ size 677359
news/urdu-news-dataset-1M.tar.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df038a232a96b41af548c8b3ffa57b0ca4e8f5bb3e2bcf88cbb8000dc07c39df
+ size 37775640
pos/test.txt.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8f1361762292f753954714db1eecfdb2e71e156e7f161f201e529ba334e803a3
+ size 243003
pos/train.txt.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7030e7d0018d60d100f0780a9bc3f9af69df3cb5fa9a4938818363ca9c57a912
+ size 9689595
qa/qa_ahadis.csv ADDED
The diff for this file is too large to render. See raw diff
 
qa/qa_gk.csv ADDED
The diff for this file is too large to render. See raw diff
 
sentiment/daraz_products_reviews.csv.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f122493dd8b61d2a9f6215e38ca97eff226bacee7033489e18de70df37bebe1d
+ size 264135
sentiment/imdb_urdu_reviews.csv.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f60f7e9972661dc5d8ec1c867972ae35f86dac32de43a274a2a794095dccdf99
+ size 31510992
sentiment/products_sentiment_urdu.csv.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0f3bd0857e6e3ea23ebe7d90468c5475dd14ad8f86a0e5d107f27dd1593135d
+ size 320436
sentiment/roman.csv.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:74758d824b81f6e616387732b8151a378465ac67a411b98a6efe2e300b7257dc
+ size 626250
sentiment/urdu.tsv.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:199c4fe30c9be9eb50b16726fc40e9eac5cda3b42355226c4ee7d049f9853bf7
+ size 50416
spacy/ur_model-0.0.0.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3fe9a0d5ce61c5de4b4a2e20788a84c464ee12b29a9e2b1587a01d7ce2000008
+ size 71093666
spacy/ur_ner-0.0.0.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5bc1779776c625378b6755f218c5f26c44060c05a0622ea7e10ca728606b759d
+ size 11107654
summary/urdu_summary.tar.bz2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c15fc605d523c7987422941227e73b6ea20edd95d143bc5dcd4581bac7898f20
+ size 69278142