yqnis committed on
Commit 31c581d · verified · 1 Parent(s): 6816aa8

Update README.md

Files changed (1)
  1. README.md +73 -13

README.md CHANGED
@@ -1,22 +1,82 @@
  ---
- base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
  tags:
- - text-generation-inference
- - transformers
  - unsloth
- - mistral
  - trl
- license: apache-2.0
- language:
- - en
  ---

- # Uploaded model

- - **Developed by:** yqnis
- - **License:** apache-2.0
- - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit

- This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

  ---
+ library_name: transformers
  tags:
  - unsloth
  - trl
+ - sft
+ - med
+ - mistral
+ - quaero
+ - lora
  ---

+ # Mistral 7B Instruct v0.3 fine-tuned on Quaero for Named Entity Recognition (Generative)
+
+ This is a **LoRA adapter** version of [unsloth/mistral-7b-instruct-v0.3](https://huggingface.co/unsloth/mistral-7b-instruct-v0.3), fine-tuned on the [Quaero French medical dataset](https://quaerofrenchmed.limsi.fr/) using a generative approach to Named Entity Recognition (NER).
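+
+ A minimal usage sketch with Hugging Face Transformers and PEFT (illustrative, not the exact inference code: the adapter repo id below is a placeholder, and the prompt wording is an assumption, since the training prompt is not reproduced in this card):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ base_id = "unsloth/mistral-7b-instruct-v0.3"
+ adapter_id = "<this-adapter-repo-id>"  # placeholder: point this at the LoRA adapter repository
+
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+ model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
+ model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
+
+ # Assumed instruction wording; the exact template used during fine-tuning is not given here.
+ sentence = "Etude de l'efficacité et de la tolérance de la prazosine à libération prolongée chez des patients hypertendus et diabétiques non insulinodépendants."
+ messages = [{"role": "user", "content": f"Extract the medical entities from this sentence:\n{sentence}"}]
+ input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+
+ output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
+ print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```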
+
+ ## Task
+
+ The model was trained to extract entities from French biomedical sentences (MEDLINE titles) using a structured, prompt-based format.
+
+ | Tag | Description |
+ | ------ | ----------------------------------------------------------- |
+ | `DISO` | **Diseases** or health-related conditions |
+ | `ANAT` | **Anatomical parts** (organs, tissues, body regions, etc.) |
+ | `PROC` | **Medical or surgical procedures** |
+ | `DEVI` | **Medical devices or instruments** |
+ | `CHEM` | **Chemical substances or medications** |
+ | `LIVB` | **Living beings** (e.g. humans, animals, bacteria, viruses) |
+ | `GEOG` | **Geographical locations** (e.g. countries, regions) |
+ | `OBJC` | **Physical objects** not covered by other categories |
+ | `PHEN` | **Biological processes** (e.g. inflammation, mutation) |
+ | `PHYS` | **Physiological functions** (e.g. respiration, vision) |
+
+ I use `<>` as the separator, and the output format is:
+
+ ```
+ TAG_1 entity_1 <> TAG_2 entity_2 <> ... <> TAG_n entity_n
+ ```
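+
+ Parsing this output back into (tag, entity) pairs is straightforward; a minimal sketch (not part of the training or evaluation code), assuming the tag is always the first whitespace-separated token of each segment:
+
+ ```python
+ def parse_prediction(text: str) -> set[tuple[str, str]]:
+     """Split 'TAG entity <> TAG entity ...' into a set of (tag, entity) pairs."""
+     pairs = set()
+     for segment in text.split("<>"):
+         segment = segment.strip()
+         if not segment:
+             continue
+         tag, _, entity = segment.partition(" ")  # tag is the first token, the rest is the entity
+         if entity:
+             pairs.add((tag, entity.strip()))
+     return pairs
+
+ print(parse_prediction("DISO Cancer <> DISO Cancer du pancréas <> ANAT pancréas"))
+ # {('DISO', 'Cancer'), ('DISO', 'Cancer du pancréas'), ('ANAT', 'pancréas')}
+ ```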
+
+ ## Dataset
+
+ The original dataset is the QUAERO French Medical Corpus, which I converted to a JSON format for generative instruction-style training.
+
+ ```json
+ {
+ "input": "Etude de l'efficacité et de la tolérance de la prazosine à libération prolongée chez des patients hypertendus et diabétiques non insulinodépendants.",
+ "output": "DISO tolérance <> CHEM prazosine <> LIVB patients <> DISO hypertendus <> DISO diabétiques non insulinodépendants"
+ }
+ ```
+
+ The QUAERO French Medical Corpus features **overlapping entity spans**, including nested structures, for instance:
+
+ ```json
+ {
+ "input": "Cancer du pancréas",
+ "output": "DISO Cancer <> DISO Cancer du pancréas <> ANAT pancréas"
+ }
+ ```
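+
+ The exact prompt template used for fine-tuning is not reproduced in this card; a hypothetical formatting function for turning one JSON record into a chat-style SFT example (the instruction wording below is an assumption) could look like:
+
+ ```python
+ import json
+
+ def to_chat_example(record: dict) -> dict:
+     """Turn one {'input', 'output'} record into a chat-format SFT example."""
+     return {
+         "messages": [
+             {"role": "user", "content": "Extract the medical entities from this sentence:\n" + record["input"]},
+             {"role": "assistant", "content": record["output"]},
+         ]
+     }
+
+ record = json.loads('{"input": "Cancer du pancréas", "output": "DISO Cancer <> DISO Cancer du pancréas <> ANAT pancréas"}')
+ print(to_chat_example(record))
+ ```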
+
+ ## Evaluation
+
+ Evaluation was performed on the test split by comparing the predicted entity set against the ground truth annotations using exact (type, entity) matching.
+
+ | Metric | Score |
+ | --------- | ------ |
+ | Precision | 0.6883 |
+ | Recall | 0.7143 |
+ | F1 Score | 0.7011 |
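+
+ For reference, a minimal sketch of micro-averaged precision/recall/F1 under exact (type, entity) matching (illustrative, not the exact evaluation script), reusing the `parse_prediction` helper shown above:
+
+ ```python
+ def evaluate(predictions: list[str], references: list[str]) -> dict:
+     """Micro-averaged precision/recall/F1 over exact (tag, entity) pairs."""
+     tp = fp = fn = 0
+     for pred_text, ref_text in zip(predictions, references):
+         pred, gold = parse_prediction(pred_text), parse_prediction(ref_text)
+         tp += len(pred & gold)   # predicted pairs present in the gold set
+         fp += len(pred - gold)   # predicted pairs not in the gold set
+         fn += len(gold - pred)   # gold pairs the model missed
+     precision = tp / (tp + fp) if tp + fp else 0.0
+     recall = tp / (tp + fn) if tp + fn else 0.0
+     f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
+     return {"precision": precision, "recall": recall, "f1": f1}
+ ```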
+
+ ## Other formats
+
+ This model is also available in the following formats:
+
+ - **16bit**
+ → [yqnis/mistral-7b-quaero](https://huggingface.co/yqnis/mistral-7b-quaero)
+
+ - **GGUF Q8_0**
+ → [yqnis/mistral-7b-quaero-gguf](https://huggingface.co/yqnis/mistral-7b-quaero-gguf)
+
+ This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.