chew committed
Commit 9e0d363 · Parent(s): 0953aa7
Update README.md

README.md CHANGED
@@ -24,7 +24,7 @@ The "Seal" model is a novel language model built on top of Meta's LLAMA-2 archit
 - Architecture: Meta's LLAMA-2
 - Training Approach: Fine-tuning with the LORA framework, model weight merging, adapter-based adaptation
 - Development Methodology: Open Platypus
-- Contributors:
+- Contributors: Mrahc and Finch Research
 
 ## Training Process
 The "Seal" model was trained through a multi-stage process aimed at maximizing its performance and adaptability:
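The "weight merging" step named in the Training Approach bullet can be sketched numerically: LoRA trains a low-rank update B·A alongside a frozen base weight W, and merging folds the scaled update back into W so inference needs no extra adapter parameters. A minimal NumPy sketch — the dimensions, rank, and scaling below are illustrative assumptions, not the actual "Seal" configuration:

```python
import numpy as np

# Hypothetical shapes; real LLAMA-2 projection matrices are far larger.
d_out, d_in, rank, alpha = 8, 8, 2, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((rank, d_in))   # LoRA factor A (rank x d_in)
B = rng.standard_normal((d_out, rank))  # LoRA factor B, as if already trained

def lora_forward(x, W, A, B, alpha, rank):
    """Forward pass applying the low-rank adapter on the fly."""
    return x @ (W + (alpha / rank) * B @ A).T

def merge_adapter(W, A, B, alpha, rank):
    """Fold the scaled low-rank update into the base weight."""
    return W + (alpha / rank) * B @ A

# After merging, a plain matmul reproduces the adapted forward pass.
x = rng.standard_normal((1, d_in))
W_merged = merge_adapter(W, A, B, alpha, rank)
assert np.allclose(lora_forward(x, W, A, B, alpha, rank), x @ W_merged.T)
```

Because the merged matrix has the same shape as the original, the adapted model can be served with unmodified inference code.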