---
pretty_name: Telugu Colloquial Corpus (Tokenized)
dataset_summary: >-
  A tokenized version of the Telugu Colloquial Corpus, containing examples of
  informal Telugu language use.
language:
- te
task_categories:
- text-generation
- masked-language-modeling
- sentiment-classification
tags:
- telugu
- colloquial
- nlp
- tokenization
license: cc-by-sa-4.0
size_categories:
- 1K<n<10K
---
# Telugu Colloquial Corpus (Tokenized)
This dataset is a tokenized version of the Telugu Colloquial Corpus (TeCC). It contains examples of informal, everyday Telugu language, including slang, regional variations, and conversational patterns.
## Dataset Details
- Language: Telugu (`te`)
- Tokenization: Tokenized with the `bert-base-multilingual-cased` tokenizer from the `transformers` library.
- Source: [Describe where the original data came from, e.g., collected from online forums, from friends and family, or manually transcribed conversations. Be as specific as possible.]
- Purpose: This dataset is intended for NLP tasks such as the following (a minimal fine-tuning sketch appears after this list):
  - Training Telugu language models for text generation.
  - Fine-tuning pre-trained models to improve their handling of informal Telugu.
  - Developing chatbots and other conversational AI applications for Telugu.
  - Research on colloquial Telugu.
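As noted above, a minimal masked-language-model fine-tuning sketch is shown below. It assumes the `datasets` and `transformers` libraries; the repository ID, split name, and training arguments are placeholders rather than values taken from this card.

```python
# Minimal masked-LM fine-tuning sketch. The repository ID, split name, and
# training arguments below are placeholders, not values confirmed by this card.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("your-namespace/telugu-colloquial-tokenized")  # placeholder ID
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

# Dynamically mask 15% of tokens per batch for the fill-mask objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mbert-telugu-colloquial", num_train_epochs=3),
    train_dataset=dataset["train"],  # assumes a "train" split
    data_collator=collator,
)
trainer.train()
```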
## Data Fields
The dataset contains the following features (see the example below):
- `input_ids`: the input token IDs (integers indexing the tokenizer's vocabulary).
- `attention_mask`: the attention mask (1 = attend to this token, 0 = ignore padding).
- `token_type_ids`: identifies which sequence a token belongs to (all 0s in this dataset, since every example is a single sequence).
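For illustration, one example can be loaded and decoded back to text as follows; the repository ID and split name are placeholders.

```python
# Inspect one tokenized example. The repository ID and split name are placeholders.
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("your-namespace/telugu-colloquial-tokenized", split="train")
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

example = dataset[0]
print(example["input_ids"][:10])       # integer token IDs
print(example["attention_mask"][:10])  # 1 = real token, 0 = padding
print(example["token_type_ids"][:10])  # all 0s for single-sequence inputs

# Map the IDs back to readable Telugu text, dropping [CLS]/[SEP]/[PAD].
print(tokenizer.decode(example["input_ids"], skip_special_tokens=True))
```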
## Data Collection and Preprocessing
The original data was collected from [describe the sources and methods used to collect the data]. It was then preprocessed as follows (a sketch of this pipeline appears at the end of this section):
- Tokenized with the `bert-base-multilingual-cased` tokenizer using the settings `padding="longest"` and `truncation=True`.
- The following columns from the original JSON data were removed:
  - `Colloquial_Telugu`
  - `Standard_Telugu`
  - `English`
  - `Source`
  - `Notes`
  - `Type`
  - `Answer`
  - `Meaning`
  - `Author`
[Optional: Describe any other cleaning, normalization, or data augmentation steps you performed.]
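The sketch below reproduces the preprocessing described in this section. The input file name is illustrative, and it assumes (not confirmed by this card) that the `Colloquial_Telugu` column held the raw text that was tokenized.

```python
# Sketch of the preprocessing described above, under stated assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer

raw = load_dataset("json", data_files="telugu_colloquial.json")  # illustrative file name
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

def tokenize(batch):
    # padding="longest" pads each batch to its longest sequence;
    # truncation=True clips inputs to the model's maximum length (512 for mBERT).
    return tokenizer(batch["Colloquial_Telugu"], padding="longest", truncation=True)

columns_to_drop = [
    "Colloquial_Telugu", "Standard_Telugu", "English", "Source",
    "Notes", "Type", "Answer", "Meaning", "Author",
]
tokenized = raw.map(tokenize, batched=True, remove_columns=columns_to_drop)
```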
## Ethical Considerations
[Discuss any ethical considerations related to the data, such as privacy, bias, cultural sensitivity, or potential misuse. Be transparent about any potential limitations of the dataset.]
## Limitations
- The dataset is relatively small (304 examples), which may limit the performance of models trained on it.
- The dataset may contain biases present in the original data sources.
- The coverage of different Telugu dialects and social groups may be uneven.
- The `bert-base-multilingual-cased` tokenizer may not be ideal for Telugu; a tokenizer trained specifically on Telugu text could improve results (a sketch follows this list).
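As a starting point for that last limitation, the sketch below retrains the mBERT WordPiece tokenizer on Telugu text with `train_new_from_iterator`. Note that this dataset ships only token IDs, so the raw sentences and vocabulary size here are placeholders.

```python
# Sketch: derive a Telugu-specific tokenizer from the mBERT tokenizer.
from transformers import AutoTokenizer

base = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

# Placeholder: an iterable of raw colloquial Telugu sentences (not included here).
telugu_sentences = ["..."]

new_tokenizer = base.train_new_from_iterator(telugu_sentences, vocab_size=30000)
new_tokenizer.save_pretrained("telugu-wordpiece-tokenizer")
```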
## Citation
Please cite this dataset as follows:
[Add a BibTeX or plain-text citation entry here.]

## Contact
Ankitha Chowdary ([email protected])