kevinkawchak committed on (verified)
Commit fb6c861 · 1 Parent(s): 6ceaa62

Update README.md

Files changed (1): README.md +19 -0
README.md CHANGED
@@ -17,6 +17,25 @@ base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
  - **Developed by:** kevinkawchak
  - **License:** apache-2.0
  - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
+ - **Finetuned using dataset:** zjunlp/Mol-Instructions (cc-by-4.0)
+ - **Dataset identification:** Molecule-oriented Instructions
+ - **Dataset function:** Description-guided molecule design
+
+ The following are modifications of, or improvements to, the original notebooks. Please refer to the authors' models for the published primary work.
+ [Cover Image](https://drive.google.com/file/d/1J-spZMzLlPxkqfMrPxvtMZiD2_hfcGyr/view?usp=sharing) <br>
+
+ A 4-bit quantization of Meta-Llama-3-8B-Instruct was used to reduce training memory requirements when fine-tuning on the zjunlp/Mol-Instructions dataset (1-2). In addition, the minimum LoRA rank was used to reduce the overall size of the resulting models. Specifically, the molecule-oriented instructions "Description guided molecule design" task was implemented to answer general questions and general biochemistry questions. General questions were answered with high accuracy, while biochemistry-related questions returned SELFIES structures, though with limited accuracy.
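The following is a minimal sketch of how such a run is typically set up with Unsloth: load the 4-bit base model, then attach a small LoRA adapter. The rank, sequence length, and target modules shown here are assumptions; the card only states that the minimum LoRA rank was used.

```python
# Sketch only: load the 4-bit Unsloth base model and attach a small LoRA adapter.
# The rank (r), max_seq_length, and target_modules are assumed values; the card
# states only that a minimal LoRA rank was used to keep the adapter small.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",  # 4-bit bitsandbytes base
    max_seq_length=2048,
    load_in_4bit=True,
)

model = FastLanguageModel.get_peft_model(
    model,
    r=8,                       # assumed low rank; smaller r -> smaller saved adapter
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing=True,
)
```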
+
+ The notebook used Torch and Hugging Face libraries with the Unsloth llama-3-8b-Instruct-bnb-4bit quantized model. Training loss decreased steadily from 1.97 to 0.73 over 60 steps. Additional testing of the appropriate level of compression, and of the hyperparameter adjustments needed for accurate SELFIES chemical structure outputs, remains relevant, as shown in the GitHub notebook for research purposes (3). A 16-bit and a reduced 4-bit version were uploaded to Hugging Face (4-5).
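A minimal TRL training loop consistent with the 60-step run described above might look as follows; apart from max_steps=60, the batch size, learning rate, and other hyperparameters are assumed values, not the notebook's exact settings.

```python
# Sketch of the TRL fine-tuning loop; only max_steps=60 is taken from the loss
# curve described above, the remaining hyperparameters are assumptions.
from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model=model,                  # LoRA-wrapped 4-bit model from the previous sketch
    tokenizer=tokenizer,
    train_dataset=train_dataset,  # Mol-Instructions examples rendered to a "text" field
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,             # loss reported to fall from 1.97 to 0.73 over 60 steps
        learning_rate=2e-4,
        fp16=True,
        logging_steps=1,
        output_dir="outputs",
    ),
)
trainer.train()
```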
+
+ Update 04/24: The number of training steps was increased to further decrease loss, while maintaining reduced memory requirements through quantization and reduced model size through LoRA. This allowed for significantly improved responses to biochemistry-related questions; the resulting models were saved at the following LLM sizes: [8.03B](https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule16), [4.65B](https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule04). [github](https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Molecule.ipynb)
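Merged 16-bit and 4-bit checkpoints of this kind are typically exported with Unsloth's merge helpers, roughly as sketched below; the output names mirror the linked repositories, the save_method strings follow Unsloth's documented options, and the token is a placeholder.

```python
# Sketch of exporting merged checkpoints with Unsloth's helpers; output names
# mirror the repositories linked above, and the Hub token is a placeholder.
model.save_pretrained_merged("Meta-Llama-3-8B-Instruct-Molecule16", tokenizer,
                             save_method="merged_16bit")
model.save_pretrained_merged("Meta-Llama-3-8B-Instruct-Molecule04", tokenizer,
                             save_method="merged_4bit")

# Or push directly to the Hub:
model.push_to_hub_merged("kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule16", tokenizer,
                         save_method="merged_16bit", token="hf_...")
```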
+
+ References:
+ 1) Unsloth: https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit
+ 2) zjunlp: https://huggingface.co/datasets/zjunlp/Mol-Instructions
+ 3) GitHub: https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Mol.ipynb
+ 4) Hugging Face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol16
+ 5) Hugging Face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol04
 
  This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.