Update README.md
README.md
CHANGED
@@ -12,9 +12,9 @@ tags:
-#
+# SAXON 0 I
-This is
+This is an iteration of the SAXON 0 model. I am attempting to minimise the amount of finetuning I need to do by using [mergekit](https://github.com/cg123/mergekit) to let SAXON 0 absorb models trained on politics and, specifically, UK laws, since I want the model to be good at those.

## Merge Details
### Merge Method
@@ -24,8 +24,8 @@ This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method
### Models Merged

The following models were included in the merge:
-* [
-* [
+* [Llama 3.1 Politics (Dataset scraped from Reddit)](https://huggingface.co/brianmatzelle/llama3.1-8b-instruct-political-subreddits)
+* [DeepSeek R1 Llama Distill UK Legislation Expert](https://huggingface.co/EryriLabs/DeepSeek-R1-Distill-Llama-UK-Legislation-8B)

### Configuration
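The `### Configuration` YAML itself is not included in this hunk. As a hedged sketch only: a TIES merge of the two models listed above would typically look something like the following in mergekit. The `base_model` path, the `density`/`weight` values, and the `dtype` are assumptions for illustration, not values taken from this commit.

```yaml
# Hypothetical mergekit TIES configuration (not the actual config from this commit).
models:
  - model: brianmatzelle/llama3.1-8b-instruct-political-subreddits
    parameters:
      density: 0.5   # assumed: fraction of parameters kept from this model's delta
      weight: 0.5    # assumed: relative contribution to the merge
  - model: EryriLabs/DeepSeek-R1-Distill-Llama-UK-Legislation-8B
    parameters:
      density: 0.5
      weight: 0.5
base_model: path/to/SAXON-0   # placeholder: the SAXON 0 repo id is not shown in this diff
merge_method: ties
parameters:
  normalize: true
dtype: bfloat16               # assumed precision
```

With TIES, `density` prunes each fine-tuned model's parameter deltas before merging and sign conflicts are resolved by majority, which is what lets a base model "absorb" several specialist fine-tunes at once.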