---
tags:
  - generated_from_keras_callback
model-index:
  - name: bias_identificaiton45
    results: []
datasets:
  - PriyaPatel/Bias_identification
metrics:
  - accuracy
base_model: cardiffnlp/twitter-roberta-base-sentiment-latest
pipeline_tag: text-classification
---

## Model description

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest), trained on a custom dataset for bias identification in language models. It classifies input text into one of 10 bias categories.

## Intended uses & limitations

**Intended uses:**

- **Bias detection:** identifying and categorizing bias types in sentences or text fragments.
- **Research:** analyzing and understanding biases in natural language processing models.

**Limitations:**

- **Domain specificity:** the model's performance is optimized for detecting biases within the domains represented in the training data.
- **Not for general sentiment analysis:** this model is not designed for general sentiment analysis or other NLP tasks.

## Dataset used for training

This dataset was compiled to analyze various types of stereotypical biases present in language models. It incorporates data from multiple publicly available datasets, each contributing to the identification of specific bias types.

Dataset: [PriyaPatel/Bias_identification](https://huggingface.co/datasets/PriyaPatel/Bias_identification)

The biases are labeled as follows:

| Label | Bias type |
|------:|-----------|
| 0 | Race/Color |
| 1 | Socioeconomic Status |
| 2 | Gender |
| 3 | Disability |
| 4 | Nationality |
| 5 | Sexual Orientation |
| 6 | Physical Appearance |
| 7 | Religion |
| 8 | Age |
| 9 | Profession |
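
If you need to map the model's integer predictions back to these category names in code, a small dictionary like the one below works. This is just a convenience mapping built from the table above; the model's own `config.id2label` may instead use generic names such as `LABEL_0` ... `LABEL_9`.

```python
# Label id -> bias category, taken from the table above.
# Note: this is a convenience mapping for this card, not necessarily
# the id2label stored in the model's config.
ID2BIAS = {
    0: "Race/Color",
    1: "Socioeconomic Status",
    2: "Gender",
    3: "Disability",
    4: "Nationality",
    5: "Sexual Orientation",
    6: "Physical Appearance",
    7: "Religion",
    8: "Age",
    9: "Profession",
}
```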

## Training procedure

### Training hyperparameters

- Base model: cardiffnlp/twitter-roberta-base-sentiment-latest
- Optimizer: Adam with a learning rate of 1e-5
- Loss function: sparse categorical cross-entropy
- Batch size: 20
- Epochs: 3
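
As a rough sketch of how the setup above translates into code, a Keras fine-tuning loop along the following lines would reproduce the listed hyperparameters. The split and column names (`"train"`, `"text"`, `"label"`) are assumptions about the dataset layout, not taken from the original training script.

```python
# Hedged sketch of the fine-tuning setup described above.
import tensorflow as tf
from datasets import load_dataset
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

dataset = load_dataset("PriyaPatel/Bias_identification")
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment-latest")
model = TFAutoModelForSequenceClassification.from_pretrained(
    "cardiffnlp/twitter-roberta-base-sentiment-latest",
    num_labels=10,                 # 10 bias categories
    ignore_mismatched_sizes=True,  # the base model has a 3-class sentiment head
)

def tokenize(batch):
    # Column name "text" is an assumption about the dataset schema
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)
train_set = model.prepare_tf_dataset(
    tokenized["train"], batch_size=20, shuffle=True, tokenizer=tokenizer
)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(train_set, epochs=3)
```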

## Training results

| Metric   | Validation | Test   |
|----------|-----------:|-------:|
| Loss     | 0.0744     | 0.0715 |
| Accuracy | 0.9825     | 0.9832 |

## How to load the model

You can load the model with the Hugging Face `transformers` library as follows. The model was trained with Keras, so the TensorFlow classes are used here:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Load the tokenizer and the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained("PriyaPatel/bias_identificaiton45")
model = TFAutoModelForSequenceClassification.from_pretrained("PriyaPatel/bias_identificaiton45")

# Example usage
inputs = tokenizer("Your text here", return_tensors="tf")
outputs = model(**inputs)

# The predicted bias category is the index of the highest logit
predicted_label = int(tf.argmax(outputs.logits, axis=-1)[0])
```
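
Alternatively, the `pipeline` API can be used for quick experiments. Note that the label names it returns come from the model's stored config and may be generic identifiers such as `LABEL_0` ... `LABEL_9`; the table above (or the `ID2BIAS` mapping sketched earlier) translates those indices into bias categories.

```python
from transformers import pipeline

# framework="tf" assumes the repository ships TensorFlow weights
classifier = pipeline(
    "text-classification",
    model="PriyaPatel/bias_identificaiton45",
    framework="tf",
)

print(classifier("Your text here"))
# Illustrative output shape: [{'label': 'LABEL_3', 'score': 0.98}]
```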