Update README.md
README.md
CHANGED
@@ -55,7 +55,7 @@ tags:
 </a>
 </center>
 
-# LFM2-
+# LFM2-700M
 
 LFM2 is a new generation of hybrid models developed by [Liquid AI](https://www.liquid.ai/), specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.
 
@@ -149,7 +149,7 @@ Here is an example of how to generate an answer with transformers in Python:
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
 # Load model and tokenizer
-model_id = "LiquidAI/LFM2-
+model_id = "LiquidAI/LFM2-700M"
 model = AutoModelForCausalLM.from_pretrained(
     model_id,
     device_map="auto",
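The second hunk only captures the opening lines of the README's transformers example, ending mid-call. For context, here is a minimal end-to-end sketch of what such an example typically looks like with the updated `LiquidAI/LFM2-700M` identifier; the dtype, chat-template flow, prompt, and generation settings are assumptions for illustration, not taken from this diff.

```python
# Minimal sketch (assumed usage, not part of this diff): load LFM2-700M
# with transformers and generate a reply to a single chat message.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-700M"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",        # place weights on available devices automatically
    torch_dtype="bfloat16",   # assumed dtype; adjust to your hardware
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build a chat-formatted prompt and generate an answer.
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is edge AI?"}],  # example question, assumed
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```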