---
language: multilingual
license: mit
tags:
  - transformer
  - summarization
  - translation
  - question-answering
  - english
  - arabic
datasets:
  - miscovery/arabic_egypt_english_world_facts
pipeline_tag: summarization
library_name: transformers
---

# Miscovery Transformer Model

This is a transformer-based encoder-decoder model that supports multiple NLP tasks:

- Text summarization
- Translation (English-Arabic)
- Question answering

## Model Architecture

- Model type: miscovery
- Number of parameters: 485,674,144 (~486M)
- Encoder layers: 12
- Decoder layers: 12
- Attention heads: 12
- Hidden size: 768
- Feed-forward size: 3072
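
To verify the parameter count yourself, here is a minimal sketch. It assumes the checkpoint loads through the standard `transformers` auto classes with `trust_remote_code=True`, since `miscovery` is a custom model type; that loading path is an assumption, not documented on this card.

```python
from transformers import AutoModel

# Load the custom architecture; trust_remote_code is assumed to be
# required because "miscovery" is not a built-in transformers model type.
model = AutoModel.from_pretrained("miscovery/model", trust_remote_code=True)

# Sum the element counts of all parameter tensors.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,}")  # expected: 485,674,144
```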

## Training

The model was trained in two stages:

1. Pre-training on a sentence-rearrangement objective (illustrated below)
2. Fine-tuning on the downstream tasks
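
The card does not spell out the pre-training objective in detail. The hypothetical sketch below only illustrates the general idea of sentence rearrangement: the encoder sees sentences in shuffled order and the decoder learns to restore the original order. The helper function and splitting heuristic are illustrative, not the actual training code.

```python
import random

def make_rearrangement_pair(text: str, seed: int = 0) -> tuple[str, str]:
    """Split text into sentences, shuffle them, and return
    (shuffled_input, original_target) as one pre-training example."""
    sentences = [s.strip() + "." for s in text.split(".") if s.strip()]
    shuffled = sentences[:]
    random.Random(seed).shuffle(shuffled)
    return " ".join(shuffled), " ".join(sentences)

src, tgt = make_rearrangement_pair(
    "World War I began in 1914. It ended in 1918. It reshaped Europe."
)
print(src)  # shuffled order -> encoder input
print(tgt)  # original order -> decoder target
```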

## Usage

1. Install the package:

```bash
pip install miscovery-model
```

2. Run the model from a script:
```python
from miscovery_model import standard_pipeline

# Create a pipeline
model = standard_pipeline("miscovery/model")

# Use it
result = model("Translate this to Arabic: What year did World War I begin?")
print(result)
```
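
The card documents only the translation prompt shown above. The prefixes below for summarization and question answering mirror that style and are assumptions, not a confirmed prompt format:

```python
from miscovery_model import standard_pipeline

model = standard_pipeline("miscovery/model")

article = (
    "World War I began in 1914 after the assassination of Archduke "
    "Franz Ferdinand and ended in 1918."
)

# Assumed prompt prefixes, modeled on the translation example above.
print(model("Summarize: " + article))
print(model("Answer the question: What year did World War I begin?"))
```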

## Limitations

The model was trained primarily on the miscovery/arabic_egypt_english_world_facts dataset and may not generalize well to other domains.