---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: Nato-chat
  results: []

language: 
- en

widget:
- text: "What is the Full form of NATO?"
  example_title: "Full Form"
- text: "Name the NATO member countries."
  example_title: "NATO Members"
- text: "What kind of support did Ukraine offer to NATO?"
  example_title: "Example 1"
- text: "Which country withdrew from the integrated military command of NATO in 1966?"
  example_title: "Example 2"
- text: "Who were the original members of NATO?"
  example_title: "OG Members"
- text: "When was NATO established?"
  example_title: "Example 3"
- text: "How many NATO members are there currently?"
  example_title: "Example 4"
- text: "Who are the representatives of NATO member countries?"
  example_title: "Example 5"
- text: "Question: What is the aim of the Mediterranean Dialogue?"
  example_title: "Example 6"
inference:
  parameters:
    max_length: 600
---

# Nato-chat

This model is a fine-tuned version of Flan-T5 Base.
It achieves the following results on the evaluation set:
- Loss: 0.1764
- Rouge1: 0.6435
- Rouge2: 0.5596
- Rougel: 0.6287
- Rougelsum: 0.6312
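To make the scores above concrete, here is a minimal sketch of what the Rouge1 number measures: unigram-overlap F1 between a generated answer and a reference answer. (The reported metric is computed with the `rouge_score` package, which additionally applies tokenization and stemming, so this is an approximation.)

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Unigram-overlap F1 between a prediction and a reference.

    A simplified stand-in for ROUGE-1: the real metric also applies
    tokenization and stemming via the `rouge_score` package.
    """
    pred = Counter(prediction.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((pred & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

A Rouge1 of 0.6435 therefore means that, on average, roughly two-thirds of the words in the generated answers overlap with the reference answers.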

## Model description

Nato-chat is based on Flan-T5 Base and is intended for answering questions about NATO (see the widget examples above).
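A minimal inference sketch. The repo id below is a placeholder, not a verified Hub path — replace it with this model's actual Hugging Face Hub path; `max_length=600` matches the inference parameters in the card metadata.

```python
# "Nato-chat" is a placeholder repo id; substitute the real Hub path.
MODEL_ID = "Nato-chat"

def build_prompt(question: str) -> str:
    # Flan-T5 models accept plain natural-language questions as input.
    return question.strip()

if __name__ == "__main__":
    from transformers import pipeline  # requires `pip install transformers`

    qa = pipeline("text2text-generation", model=MODEL_ID)
    result = qa(build_prompt("What is the Full form of NATO?"), max_length=600)
    print(result[0]["generated_text"])
```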

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
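The `linear` scheduler decays the learning rate linearly from `learning_rate` to 0 over training (after any warmup steps). A small sketch of that schedule, assuming the standard Hugging Face formulation:

```python
def linear_lr(step: int, total_steps: int,
              base_lr: float = 2e-05, warmup_steps: int = 0) -> float:
    """Learning rate at a given step under a linear schedule:
    linear warmup to base_lr, then linear decay to 0."""
    if warmup_steps and step < warmup_steps:
        return base_lr * step / warmup_steps
    remaining = max(0, total_steps - step)
    span = max(1, total_steps - warmup_steps)
    return base_lr * remaining / span
```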

### Framework versions

- Transformers 4.35.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0