---
license: apache-2.0
language:
- en
library_name: transformers
---
# Model Card: bart_fine_tuned_model
## Model Name
generate_summaries
### Model Description
This model is a fine-tuned version of facebook/bart-large, adapted for the task of resume summarization. It has been trained to generate concise, relevant summaries from long resume texts, with fine-tuning specializing the original BART model for summarization on a domain-specific dataset.
### Model Information
- **Base Model:** facebook/bart-large
- **Fine-tuning Dataset:** to be made available in the future
### Training Parameters
- **Evaluation Strategy:** epoch
- **Learning Rate:** 5e-5
- **Per Device Train Batch Size:** 8
- **Per Device Eval Batch Size:** 8
- **Weight Decay:** 0.01
- **Save Total Limit:** 5
- **Number of Training Epochs:** 10
- **Predict with Generate:** True
- **Gradient Accumulation Steps:** 1
- **Optimizer:** paged_adamw_32bit
- **Learning Rate Scheduler Type:** cosine
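For reference, these settings map onto Hugging Face `Seq2SeqTrainingArguments` roughly as sketched below; the `output_dir` value is an illustrative placeholder, and `optim="paged_adamw_32bit"` additionally requires the `bitsandbytes` package.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the training configuration listed above; output_dir is hypothetical.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart_fine_tuned_model",  # placeholder, not the actual path used
    evaluation_strategy="epoch",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    weight_decay=0.01,
    save_total_limit=5,
    num_train_epochs=10,
    predict_with_generate=True,
    gradient_accumulation_steps=1,
    optim="paged_adamw_32bit",  # needs the bitsandbytes package
    lr_scheduler_type="cosine",
)
```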
## How to Use
**1.** Install the transformers library:

```bash
pip install transformers
```
**2.** Import the necessary modules:

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration
```
**3.** Initialize the model and tokenizer:

```python
model_name = 'GebeyaTalent/generate_summaries'
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)
```
**4.** Prepare the text for summarization:

```python
text = 'Your resume text here'
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding="max_length")
```
**5.** Generate the summary:

```python
min_length_threshold = 55
# Pass the attention mask so the padding added in step 4 is ignored during generation.
summary_ids = model.generate(inputs["input_ids"], attention_mask=inputs["attention_mask"],
                             num_beams=4, min_length=min_length_threshold,
                             max_length=150, early_stopping=True)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```
**6.** Output the summary:

```python
print("Summary:", summary)
```
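Alternatively, steps 2 through 6 can be collapsed into a single call with the `pipeline` API; the generation settings below simply mirror step 5 and are not defaults baked into the model.

```python
from transformers import pipeline

# One-call alternative to steps 2-6; generation settings mirror step 5.
summarizer = pipeline("summarization", model="GebeyaTalent/generate_summaries")
result = summarizer("Your resume text here", num_beams=4, min_length=55, max_length=150)
print("Summary:", result[0]["summary_text"])
```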
## Model Card Authors
Dereje Hinsermu
## Model Card Contact