Liquid AI

LFM2-1.2B-Tool-GGUF

Based on LFM2-1.2B, LFM2-1.2B-Tool is designed for concise and precise tool calling. The key challenge was designing a non-thinking model that outperforms similarly sized thinking models for tool use.

Use cases:

  • Mobile and edge devices requiring instant API calls, database queries, or system integrations without cloud dependency.
  • Real-time assistants in cars, IoT devices, or customer support, where response latency is critical.
  • Resource-constrained environments like embedded systems or battery-powered devices needing efficient tool execution.
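A tool-calling model like this one emits a structured call (function name plus arguments) that the host application parses and executes. The round trip can be sketched as below; the `get_weather` tool and the `{"name": ..., "arguments": ...}` payload shape are illustrative assumptions, not taken from the LFM2 documentation.

```python
import json

# Hypothetical tool the model might call; the name and signature are
# illustrative, not part of the LFM2 tool specification.
def get_weather(city: str) -> str:
    # Stub: a real integration would query a weather API or local sensor.
    return f"Sunny in {city}"

# Registry mapping tool names the model knows about to Python callables.
TOOLS = {"get_weather": get_weather}

def dispatch(tool_call_json: str) -> str:
    """Parse a JSON tool call emitted by the model and run the matching function."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Example: a payload in the common {"name": ..., "arguments": ...} shape.
print(dispatch('{"name": "get_weather", "arguments": {"city": "Paris"}}'))
```

The result string would then be appended to the conversation as a tool message so the model can produce its final answer.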

More information about the other task-specific LFM2 models is available in the accompanying blog post.

πŸƒ How to run LFM2

Example usage with llama.cpp:

llama-cli -hf LiquidAI/LFM2-1.2B-Tool-GGUF
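Beyond the interactive CLI, llama.cpp's server exposes an OpenAI-compatible chat endpoint that accepts tool definitions. A minimal sketch of such a request body follows; the `get_weather` tool schema is a hypothetical example, and the exact serving setup is an assumption rather than an official recipe.

```python
import json

# Illustrative OpenAI-style chat request with one tool definition, as could be
# POSTed to an OpenAI-compatible endpoint. The tool name, description, and
# parameter schema are hypothetical.
payload = {
    "model": "LFM2-1.2B-Tool",
    "messages": [
        {"role": "user", "content": "What is the weather in Paris?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

# Serialize to the JSON body that would be sent over HTTP.
body = json.dumps(payload)
```

If the model decides a tool is needed, the response contains a structured tool call instead of plain text, which the client executes and feeds back.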
Downloads last month: 514

GGUF details:

  • Model size: 1.17B params
  • Architecture: lfm2
  • Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, 16-bit


Model tree for LiquidAI/LFM2-1.2B-Tool-GGUF:

  • Base model: LiquidAI/LFM2-1.2B