Liquid AI

LFM2-350M-ENJP-MT-GGUF

Based on the LFM2-350M model, this checkpoint has been fine-tuned for near real-time bi-directional Japanese/English translation of short-to-medium inputs.

Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2-350M-ENJP-MT

πŸƒ How to run LFM2

Example usage with llama.cpp:

Translate to English:

llama-cli -hf LiquidAI/LFM2-350M-ENJP-MT-GGUF -sys "Translate to English." -st

Translate to Japanese:

llama-cli -hf LiquidAI/LFM2-350M-ENJP-MT-GGUF -sys "Translate to Japanese." -st

Run a quantized variant (here Q4_0):

llama-cli -hf LiquidAI/LFM2-350M-ENJP-MT-GGUF:Q4_0 -sys "Translate to Japanese." -st
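When calling the model programmatically rather than through llama-cli, the same system prompts ("Translate to English." / "Translate to Japanese.") are embedded in a ChatML-style chat template. The sketch below is a minimal prompt builder, assuming the special tokens documented for the base LFM2 model (`<|startoftext|>`, `<|im_start|>`, `<|im_end|>`); verify them against the tokenizer config before relying on this:

```python
def build_lfm2_prompt(system: str, user: str) -> str:
    """Build a ChatML-style prompt string for LFM2 chat models.

    Assumption: special tokens follow the chat template of the base
    LFM2-350M model (check tokenizer_config.json to confirm).
    """
    return (
        "<|startoftext|><|im_start|>system\n"
        f"{system}<|im_end|>\n"
        "<|im_start|>user\n"
        f"{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

# Example: ask for an English translation of a short Japanese input.
prompt = build_lfm2_prompt("Translate to English.", "γŠγ―γ‚ˆγ†γ”γ–γ„γΎγ™")
print(prompt)
```

The resulting string can be passed as a raw prompt to any GGUF runtime (for example llama.cpp's completion endpoint) when you are not using its built-in chat-template handling.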
Format: GGUF
Model size: 354M parameters
Architecture: lfm2

Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, 16-bit, and 32-bit.

