ArEn-TweetSentiment-BERT-Hatem

ArEn-TweetSentiment-BERT-Hatem is a bilingual sentiment analysis model trained on both Arabic and English tweets. It is based on the bert-base-multilingual-cased model from Hugging Face Transformers.

The model distinguishes between positive and negative sentiments in real-world social media content, specifically Twitter data.

🧠 Model Details

  • Base model: bert-base-multilingual-cased
  • Fine-tuned on: a bilingual tweet dataset (Sentiment140 plus additional Arabic tweets)
  • Task: Binary sentiment classification (0 = Negative, 1 = Positive)
  • Languages: Arabic, English
  • Tokenizer: bert-base-multilingual-cased tokenizer
  • Model size: ~178M parameters (F32, Safetensors)
  • Evaluation: metrics computed on a 10% holdout from the training set

🔁 Training Details

  • Framework: 🤗 Transformers + PyTorch
  • Epochs: 2
  • Optimizer: AdamW (the Trainer default)
  • Batch Size: 16
  • Evaluation Metrics: Accuracy, F1, Precision, Recall
  • Environment: Google Colab
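
For reference, the four evaluation metrics listed above can be computed from raw binary predictions as follows. This is a plain-Python sketch of the standard definitions (treating 1 = Positive as the positive class), not the exact code used during training:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary labels (1 = Positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```

In practice these values match what sklearn or the 🤗 evaluate library would report for the same predictions.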

📊 Evaluation Results

✅ Experiment 1 – Initial Run (2K Samples)

| Epoch | Train Loss | Val Loss | Accuracy | F1 Score | Precision | Recall |
|-------|------------|----------|----------|----------|-----------|--------|
| 1     | 0.6266     | 0.7536   | 59.00%   | 0.1800   | 0.6429    | 0.1047 |
| 2     | 0.5127     | 0.5944   | 72.00%   | 0.6667   | 0.6829    | 0.6512 |

✅ Experiment 2 – Refined Arabic Dataset (20K Samples)

| Epoch | Train Loss | Val Loss | Accuracy | F1 Score | Precision | Recall |
|-------|------------|----------|----------|----------|-----------|--------|
| 1     | 0.5851     | 0.5879   | 70.85%   | 0.6674   | 0.6139    | 0.7312 |
| 2     | 0.4792     | 0.5007   | 78.65%   | 0.7105   | 0.7763    | 0.6550 |

✅ Experiment 3 – Large-Scale Ar+En Dataset (100K Samples)

| Epoch | Train Loss | Val Loss | Accuracy | F1 Score | Precision | Recall |
|-------|------------|----------|----------|----------|-----------|--------|
| 1     | 0.5231     | 0.5846   | 72.35%   | 0.7127   | 0.6171    | 0.8434 |
| 2     | 0.4404     | 0.4496   | 79.98%   | 0.7502   | 0.7615    | 0.7394 |

🔍 Summary: Larger datasets led to higher recall and more robust generalization across languages. In the final run, the model reached 79.98% accuracy and a 0.75 F1 score.
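
As a sanity check, the reported F1 scores are consistent with the harmonic mean of the reported precision and recall. For example, for epoch 2 of Experiment 3:

```python
# Epoch 2 of Experiment 3: precision and recall as reported above.
precision, recall = 0.7615, 0.7394

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.7503, matching the reported 0.7502 up to rounding
```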


🧪 How to Reproduce

The model was fine-tuned using Trainer from the Hugging Face transformers library on a multilingual sentiment dataset (based on Sentiment140 and additional Arabic tweets).

  • Training Time: ~1h30min on a Colab GPU
  • Model: bert-base-multilingual-cased
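
A minimal fine-tuning sketch along these lines is shown below. Dataset loading is elided, and the column names (`text`, `label`), the 128-token cap, and the output path are illustrative assumptions rather than the exact script used; the full code is linked under Source Code below.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    # Tweets are short; 128 tokens is a typical cap (an assumption, not from the card).
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

# train_ds / eval_ds: a 90/10 split of the labeled tweet dataset (loading elided).

args = TrainingArguments(
    output_dir="aren-tweet-sentiment",   # illustrative path
    num_train_epochs=2,
    per_device_train_batch_size=16,
    evaluation_strategy="epoch",         # renamed eval_strategy in newer releases
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds.map(tokenize, batched=True),
#                   eval_dataset=eval_ds.map(tokenize, batched=True))
# trainer.train()
```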

📦 How to Use

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="HatemMoushir/ArEn-TweetSentiment-BERT-Hatem")
print(classifier("الخدمة كانت ممتازة"))  # "The service was excellent"
print(classifier("I hate this product."))
```
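
The pipeline returns the raw label ids from the model config (`LABEL_0`/`LABEL_1`, as assumed by the testing scripts below). A small helper can map a prediction to a readable name; the function name here is illustrative:

```python
def to_sentiment(prediction):
    """Map a pipeline prediction like {'label': 'LABEL_1', 'score': ...} to a readable name."""
    names = {"LABEL_0": "Negative", "LABEL_1": "Positive"}
    return names.get(prediction["label"], prediction["label"]), prediction["score"]

# Example with a hard-coded prediction (no model download needed):
print(to_sentiment({"label": "LABEL_1", "score": 0.98}))  # ('Positive', 0.98)
```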

Testing

  • Example 1 (Arabic)

```python
from transformers import pipeline

# Load the model
classifier = pipeline("sentiment-analysis", model="HatemMoushir/ArEn-TweetSentiment-BERT-Hatem")

# Sample sentences with their true labels (1 = Positive, 0 = Negative)
samples = [
    ("أنا سعيد جدًا اليوم", 1),
    ("الجو ممطر وهذا يجعلني حزينًا", 0),
    ("نجحت في الامتحان!", 1),
    ("أشعر بالإحباط من الأخبار", 0),
    ("أحب أصدقائي كثيرًا", 1),
    ("هذا أسوأ يوم في حياتي", 0),
    ("أشعر بالراحة والطمأنينة", 1),
    ("لم أتمكن من النوم جيدًا الليلة", 0),
    ("اليوم جميل ومشمس", 1),
    ("كل شيء يسير بشكل خاطئ", 0),
    ("أحب مشاهدة الأفلام مع عائلتي", 1),
    ("تأخرت عن العمل وفقدت مزاجي", 0),
    ("أشعر بالنشاط والحيوية", 1),
    ("المكان مزدحم ولا أستطيع التحمل", 0),
    ("قضيت عطلة رائعة على الشاطئ", 1),
    ("انتهى اليوم بشكل سيء", 0),
    ("أشعر بالتفاؤل بشأن المستقبل", 1),
    ("لم يعجبني الطعام اليوم", 0),
    ("أشعر بالحب من الجميع", 1),
    ("خسرت كل شيء في لحظة", 0),
    ("الموسيقى تجعلني سعيدًا", 1),
    ("الطريق مزدحم وأنا غاضب", 0),
    ("أنا ممتن لكل شيء لدي", 1),
    ("كان يومًا مرهقًا جدًا", 0),
    ("أشعر بالأمل رغم الصعوبات", 1),
    ("لا أطيق الانتظار لزيارة أصدقائي", 1),
    ("تجاهلني في الاجتماع وشعرت بالإهانة", 0),
    ("فزت في المسابقة!", 1),
    ("الجو خانق ولا يُحتمل", 0),
    ("تلقيت رسالة جميلة من صديقي", 1),
    ("انقطعت الكهرباء وفاتني الفيلم", 0),
    ("أنا محظوظ بعائلتي", 1),
    ("لا أحد يهتم بي", 0),
    ("الهدوء في هذا المكان يريحني", 1),
    ("خسرت فرصتي الأخيرة", 0),
    ("أشعر أنني محبوب", 1),
    ("ضاعت أمتعتي في المطار", 0),
    ("قمت بعمل جيد اليوم", 1),
    ("لا أريد التحدث مع أحد", 0),
    ("أنا ممتن للحياة", 1),
    ("يوم ممل وبلا فائدة", 0),
    ("تلقيت ترقية في العمل", 1),
    ("أشعر بالإجهاد والتعب", 0),
    ("الهدية أسعدتني كثيرًا", 1),
    ("انهرت من الضغط", 0),
    ("تناولت وجبة لذيذة", 1),
    ("تأخرت الرحلة وأشعر بالضيق", 0),
    ("حققت هدفًا كنت أسعى له", 1),
    ("الخسارة كانت قاسية", 0),
    ("أنا فخور بنفسي", 1),
    ("فقدت الثقة في من حولي", 0),
    ("عطلة نهاية الأسبوع كانت رائعة", 1),
    ("لا أجد أي دافع للاستمرار", 0),
    ("ابني نجح في دراسته", 1),
    ("كل من حولي خذلني", 0),
    ("مشيت على البحر وكان الجو جميلًا", 1),
    ("تعرضت لموقف محرج أمام الجميع", 0),
    ("أشعر بالسعادة لأني ساعدت شخصًا", 1),
    ("تم تجاهلي بالكامل", 0),
    ("نمت جيدًا واستيقظت بنشاط", 1),
    ("لا أشعر بأي تقدم", 0),
    ("يوم رائع مع أصدقائي", 1),
    ("فشلت مرة أخرى", 0),
    ("تلقيت مكالمة أسعدتني", 1),
    ("كل شيء ينهار من حولي", 0),
    ("استمتعت بالأجواء اليوم", 1),
    ("أشعر بالقلق المستمر", 0),
    ("كان اللقاء دافئًا ومليئًا بالحب", 1),
    ("لا أتحمل الضغط أكثر", 0),
    ("نجح مشروعي أخيرًا", 1),
    ("فقدت عملي اليوم", 0),
    ("قضيت وقتًا ممتعًا في الحديقة", 1),
    ("أنا خائف مما سيأتي", 0),
    ("تلقيت دعمًا كبيرًا من أصدقائي", 1),
    ("اليأس يسيطر علي", 0),
    ("رحلتي كانت مليئة بالفرح", 1),
    ("لا شيء يسعدني مؤخرًا", 0),
    ("أحببت الفيلم كثيرًا", 1),
    ("كلماتهم جرحتني", 0),
    ("تذوقت طعامًا رائعًا", 1),
    ("لا أرى فائدة من المحاولة", 0),
    ("ضحكنا كثيرًا اليوم", 1),
    ("حلمي تبخر", 0),
    ("لحظة اللقاء كانت ساحرة", 1),
    ("خسرت أقرب الناس إلي", 0),
    ("المشي في الطبيعة يريح أعصابي", 1),
    ("لم يصدقني أحد", 0),
    ("ابتسامة طفل جعلت يومي أفضل", 1),
    ("كل شيء أصبح صعبًا", 0),
    ("اليوم احتفلت بنجاحي", 1),
    ("انهار كل شيء في لحظة", 0),
    ("أمضيت وقتًا ممتعًا مع العائلة", 1),
    ("فقدت الأمل تمامًا", 0),
    ("قضيت يومًا رائعًا في الريف", 1),
    ("الناس لا يفهمونني", 0),
    ("استمتعت بالموسيقى والهدوء", 1),
    ("لا أشعر بالسعادة أبدًا", 0),
    ("الأصدقاء جلبوا لي السعادة", 1),
    ("تعبت من المحاولة", 0),
    ("كل لحظة كانت رائعة", 1),
    ("كل شيء فشل", 0),
    ("النجاح كان ثمرة جهدي", 1),
    ("لا أملك شيئًا أفرح به", 0)
]

# Run the model and compare predictions with the true labels
correct = 0

for i, (text, true_label) in enumerate(samples):
    result = classifier(text)[0]

    predicted_label = 1 if result["label"] == "LABEL_1" else 0
    is_correct = predicted_label == true_label
    correct += is_correct

    print(f"{i+1}. \"{text}\"")
    print(f"   🔍 Model → {predicted_label} | 🎯 True → {true_label} | {'✔️ Correct' if is_correct else '❌ Wrong'}\n")

# Compute accuracy
accuracy = correct / len(samples)
print(f"✅ Accuracy: {accuracy * 100:.2f}%")
```
  • Example 2 (English)

```python
from transformers import pipeline

# Load the model
classifier = pipeline("sentiment-analysis", model="HatemMoushir/ArEn-TweetSentiment-BERT-Hatem")

# English sample sentences with their true labels: 1 = Positive, 0 = Negative
samples = [
    ("I love this place!", 1),
    ("I hate waiting in traffic.", 0),
    ("Today is a beautiful day", 1),
    ("I am really disappointed", 0),
    ("Feeling great about this opportunity", 1),
    ("This movie was terrible", 0),
    ("Absolutely loved the dinner", 1),
    ("I’m sad and frustrated", 0),
    ("My friends make me happy", 1),
    ("Everything went wrong today", 0),
    ("What a fantastic game!", 1),
    ("Worst experience ever", 0),
    ("The weather is amazing", 1),
    ("I can’t stand this anymore", 0),
    ("So proud of my achievements", 1),
    ("Feeling down", 0),
    ("Just got a promotion!", 1),
    ("Why does everything suck?", 0),
    ("Best vacation ever", 1),
    ("I’m tired of this nonsense", 0),
    ("Such a lovely gesture", 1),
    ("That was rude and uncalled for", 0),
    ("Finally some good news!", 1),
    ("I'm so lonely", 0),
    ("My cat is the cutest", 1),
    ("This food tastes awful", 0),
    ("Celebrating small wins today", 1),
    ("Not in the mood", 0),
    ("Grateful for everything", 1),
    ("I feel useless", 0),
    ("Such a peaceful morning", 1),
    ("Another failure, just great", 0),
    ("Got accepted into college!", 1),
    ("I hate being ignored", 0),
    ("The sunset was breathtaking", 1),
    ("You ruined my day", 0),
    ("He makes me feel special", 1),
    ("Everything is falling apart", 0),
    ("Can't wait for the weekend", 1),
    ("So much stress right now", 0),
    ("I’m in love", 1),
    ("I don’t care anymore", 0),
    ("Won first place!", 1),
    ("This is so frustrating", 0),
    ("He always cheers me up", 1),
    ("Feeling stuck", 0),
    ("Had a wonderful time", 1),
    ("Nothing matters", 0),
    ("Looking forward to tomorrow", 1),
    ("Just leave me alone", 0),
    ("We made it!", 1),
    ("Horrible customer service", 0),
    ("The music lifts my spirits", 1),
    ("I'm drowning in problems", 0),
    ("My team won the match", 1),
    ("I wish I never came", 0),
    ("Sunshine and good vibes", 1),
    ("Everything is a mess", 0),
    ("Love the energy here", 1),
    ("Feeling hopeless", 0),
    ("She always makes me smile", 1),
    ("So many regrets", 0),
    ("Today was a success", 1),
    ("Bad day again", 0),
    ("I’m truly blessed", 1),
    ("This is depressing", 0),
    ("Can't stop smiling", 1),
    ("Everything hurts", 0),
    ("So excited for this!", 1),
    ("I hate myself", 0),
    ("Best concert ever", 1),
    ("Life is unfair", 0),
    ("Happy and content", 1),
    ("Crying inside", 0),
    ("Feeling inspired", 1),
    ("The service was awful", 0),
    ("Joy all around", 1),
    ("I feel dead inside", 0),
    ("It’s a dream come true", 1),
    ("Nothing good ever happens", 0),
    ("Feeling positive", 1),
    ("That hurt my feelings", 0),
    ("Success tastes sweet", 1),
    ("I can't handle this", 0),
    ("We had a blast", 1),
    ("It’s not worth it", 0),
    ("He’s such a kind soul", 1),
    ("I'm broken", 0),
    ("Everything is perfect", 1),
    ("So tired of pretending", 0),
    ("What a nice surprise!", 1),
    ("I feel empty", 0),
    ("Can’t wait to start!", 1),
    ("It's always my fault", 0),
    ("A new beginning", 1),
    ("So much pain", 0),
    ("My heart is full", 1),
    ("This sucks", 0),
    ("I feel accomplished", 1),
    ("Why bother", 0),
    ("Living my best life", 1),
    ("I just want to disappear", 0)
]

# Run the model and compare predictions with the true labels
correct = 0

for i, (text, true_label) in enumerate(samples):
    result = classifier(text)[0]
    predicted_label = 1 if result["label"] == "LABEL_1" else 0
    is_correct = predicted_label == true_label
    correct += is_correct

    print(f"{i+1}. \"{text}\"")
    print(f"   🔍 Model → {predicted_label} | 🎯 True → {true_label} | {'✔️ Correct' if is_correct else '❌ Wrong'}\n")

# Model accuracy
accuracy = correct / len(samples)
print(f"✅ Accuracy: {accuracy * 100:.2f}%")
```
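
The evaluation loop used in both testing scripts can be factored into a small helper that works with any classifier callable returning pipeline-style predictions (the function name is illustrative, not from the original scripts):

```python
def evaluate(classifier, samples):
    """Return the accuracy of a sentiment classifier over (text, true_label) pairs."""
    correct = 0
    for text, true_label in samples:
        prediction = classifier(text)[0]
        predicted_label = 1 if prediction["label"] == "LABEL_1" else 0
        correct += predicted_label == true_label
    return correct / len(samples)

# Works with the Hugging Face pipeline above, or with any stand-in for quick testing:
dummy = lambda text: [{"label": "LABEL_1" if "love" in text else "LABEL_0", "score": 1.0}]
print(evaluate(dummy, [("I love this", 1), ("so bad", 0), ("love it", 0)]))  # 2 of 3 correct
```

Because the helper only depends on the `label` key of the first prediction, it can be unit-tested without downloading the model.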

Development and Assistance

This model was developed and trained on Google Colab, with technical assistance from ChatGPT for idea generation, code authoring, and troubleshooting throughout development.


Source Code

The full code used to prepare and train the model is available on GitHub:

🔗 GitHub file source.


📜 License

MIT License. Free to use, modify, and share with attribution.

👤 Author

Developed by Hatem Moushir. Contact: [email protected]
