naveensp committed (verified)
Commit e93a94f · Parent: cc93944

Update README.md

Files changed (1): README.md (+14 -3)
README.md CHANGED
@@ -7,13 +7,13 @@ license: apache-2.0
 
 Multimodal Large Language Models (MM-LLMs) have seen significant advancements in the last year, demonstrating impressive performance across tasks. However, to truly democratize AI, models must exhibit strong capabilities and run efficiently on the small compute footprints accessible to most. As part of this quest, we introduce LLaVaOLMoBitnet1B - the first Ternary Multimodal LLM capable of accepting Image(s)+Text inputs and producing coherent textual responses. The model is fully open-sourced along with training scripts to encourage further research in this space. We also release a technical report highlighting the training process, evaluation details, challenges associated with ternary models, and future opportunities.
 
- Authors: Jainaveen Sundaram, Ravishankar Iyer
+ Authors: Jainaveen Sundaram, Ravi Iyer
 
 
 ### Training details and Evaluation
 We follow the two-step training pipeline outlined in the LLaVa1.5 paper: (1) a pre-training phase for feature alignment, followed by (2) end-to-end instruction fine-tuning.
 The pre-training phase involves 1 epoch on a filtered subset of 595K Conceptual Captions [2], with only the projection layer weights updated. For instruction fine-tuning, we use 1 epoch of the LLaVa-Instruct-150K dataset, with both projection layer and LLM weights updated.
- For model evaluation, please refer to the linked technical report (coming soon!).
+ For more details and model evaluation, please refer to the [technical report](https://arxiv.org/abs/2408.13402).
 
 ### How to use
 Start off by cloning the repository:

@@ -57,8 +57,19 @@ Intel is committed to respecting human rights and avoiding causing or contributi
 | Use cases | - |
 
 ## Citation
 
- Coming soon
+ ```bibtex
+ @misc{sundaram2024llavaolmobitnet1bternaryllmgoes,
+       title={LLaVaOLMoBitnet1B: Ternary LLM goes Multimodal!},
+       author={Jainaveen Sundaram and Ravishankar Iyer},
+       year={2024},
+       eprint={2408.13402},
+       archivePrefix={arXiv},
+       primaryClass={cs.LG},
+       url={https://arxiv.org/abs/2408.13402},
+ }
+ ```
+
 
 ## License
 
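The two-step recipe in the Training details section above reduces to a freeze/unfreeze pattern over the projection layer and the LLM. Below is a minimal, hypothetical PyTorch-style sketch of that pattern only, not the repository's actual training code; the attribute names `vision_tower`, `projector`, and `llm` are illustrative assumptions, and the released training scripts remain the authoritative reference.

```python
# Minimal sketch of the two-phase LLaVa-style recipe described above.
# NOTE: `model.vision_tower`, `model.projector`, and `model.llm` are hypothetical
# attribute names used for illustration; they are not the actual LLaVaOLMoBitnet1B API.
import torch


def set_trainable(module: torch.nn.Module, trainable: bool) -> None:
    """Freeze or unfreeze every parameter of a submodule."""
    for param in module.parameters():
        param.requires_grad = trainable


def phase1_feature_alignment_params(model):
    # Phase 1: pre-training for feature alignment on the filtered 595K
    # Conceptual Captions subset -- only the projection layer is updated.
    set_trainable(model.vision_tower, False)
    set_trainable(model.llm, False)
    set_trainable(model.projector, True)
    return [p for p in model.parameters() if p.requires_grad]


def phase2_instruction_tuning_params(model):
    # Phase 2: end-to-end instruction fine-tuning on LLaVa-Instruct-150K --
    # both the projection layer and the LLM weights are updated.
    set_trainable(model.vision_tower, False)
    set_trainable(model.llm, True)
    set_trainable(model.projector, True)
    return [p for p in model.parameters() if p.requires_grad]


# Each phase would then build its optimizer over the returned parameters, e.g.:
# optimizer = torch.optim.AdamW(phase1_feature_alignment_params(model), lr=1e-3)
```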