---
language: en
license: mit
base_model: google-bert/bert-base-uncased
tags:
- text-classification
- bert-base-uncased
datasets:
- disham993/ElectricalDeviceFeedbackBalanced
metrics:
- epoch: 1
- eval_f1: 0.8428607475458701
- eval_accuracy: 0.8557692307692307
- eval_runtime: 0.8633
- eval_samples_per_second: 1566.146
- eval_steps_per_second: 25.485
---
# disham993/electrical-classification-bert-base-uncased

## Model description
This model is fine-tuned from google-bert/bert-base-uncased for text-classification tasks.
## Training Data
The model was trained on the disham993/ElectricalDeviceFeedbackBalanced dataset.
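
To inspect the training data locally, it can be loaded with the `datasets` library. This is a minimal sketch; the split names are not listed in this card, so the printout below is the way to discover them.

```python
from datasets import load_dataset

# Load the balanced electrical-device feedback dataset used for fine-tuning.
dataset = load_dataset("disham993/ElectricalDeviceFeedbackBalanced")

# Split names (e.g. "train") are not documented in this card; printing the
# DatasetDict shows the actual splits and their sizes.
print(dataset)
```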
## Model Details
- Base Model: google-bert/bert-base-uncased
- Task: text-classification
- Language: en
- Dataset: disham993/ElectricalDeviceFeedbackBalanced
## Training procedure

### Training hyperparameters
[Please add your training hyperparameters here]
## Evaluation results

### Metrics

- epoch: 1.0
- eval_f1: 0.8428607475458701
- eval_accuracy: 0.8557692307692307
- eval_runtime: 0.8633
- eval_samples_per_second: 1566.146
- eval_steps_per_second: 25.485
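
Metrics like these are typically produced by a `compute_metrics` function passed to `transformers.Trainer`. The sketch below uses the `evaluate` library; the `average="weighted"` setting for F1 is an assumption, since this card does not state how `eval_f1` was averaged.

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    """Metric function in the format expected by transformers.Trainer."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        # "weighted" averaging is an assumption; the averaging used for eval_f1
        # is not documented in this card.
        "f1": f1.compute(predictions=preds, references=labels, average="weighted")["f1"],
    }
```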
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("disham993/electrical-classification-bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("disham993/electrical-classification-bert-base-uncased")
```
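
For quick end-to-end inference, the `pipeline` API can be used instead. The input sentence below is illustrative, and the label names in the output depend on this model's `id2label` mapping, which is not documented in this card.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="disham993/electrical-classification-bert-base-uncased",
)

# Example input is illustrative; the returned label depends on the model's id2label mapping.
print(classifier("The circuit breaker trips every time the compressor starts."))
```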
## Limitations and bias
[Add any known limitations or biases of the model]
## Training Infrastructure
[Add details about training infrastructure used]
## Last update
2025-01-05