Liquid AI

LFM2-1.2B-RAG-GGUF

Built on LFM2-1.2B, LFM2-1.2B-RAG is specialized in answering questions grounded in provided contextual documents, for use in RAG (Retrieval-Augmented Generation) systems.

Use cases:

  • A chatbot for asking questions about a particular product's documentation.
  • Customer support backed by an internal knowledge base to provide grounded answers.
  • An academic research assistant for multi-turn conversations about research papers and course materials.

You can find more information about other task-specific models in this blog post.
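The retrieve-then-generate flow these use cases rely on can be sketched in a few lines. The keyword-overlap scorer and the chat-message layout below are illustrative assumptions for a minimal pipeline, not the model's prescribed prompt format (the actual chat template is applied by the inference runtime):

```python
# Minimal RAG-style prompt assembly (illustrative sketch).

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Naive retrieval: rank documents by keyword overlap with the query.
    A real system would use an embedding model or BM25 instead."""
    q_terms = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(q_terms & set(d.lower().split())))
    return scored[:k]

def build_messages(query: str, documents: list[str]) -> list[dict]:
    """Pack the retrieved context and the user question into chat messages."""
    context = "\n\n".join(retrieve(query, documents))
    return [
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
    ]

docs = [
    "LFM2-1.2B-RAG is tuned to answer questions from provided documents.",
    "GGUF is a binary file format for models run with llama.cpp.",
    "Paris is the capital of France.",
]
messages = build_messages("What format do llama.cpp models use?", docs)
print(messages[1]["content"])
```

The resulting `messages` list can be sent to any OpenAI-compatible endpoint serving the model; only the context passed in, not the model's parametric memory, is meant to ground the answer.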

πŸƒ How to run LFM2

Example usage with llama.cpp:

```shell
llama-cli -hf LiquidAI/LFM2-1.2B-RAG-GGUF
```
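For a RAG pipeline you will typically want an HTTP endpoint rather than the interactive CLI. A hedged sketch using llama.cpp's `llama-server` (port 8080 and the OpenAI-compatible `/v1/chat/completions` route are llama.cpp defaults; the context string here is a stand-in for your retrieved documents):

```shell
# Serve the model over an OpenAI-compatible HTTP API.
llama-server -hf LiquidAI/LFM2-1.2B-RAG-GGUF --port 8080

# In another shell: send the retrieved context plus the question in one request.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "Answer using only the provided context."},
      {"role": "user", "content": "Context:\nGGUF is a llama.cpp model format.\n\nQuestion: What is GGUF?"}
    ]
  }'
```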
Format: GGUF
Model size: 1.17B params
Architecture: lfm2

Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, 16-bit


Model tree for LiquidAI/LFM2-1.2B-RAG-GGUF

Base model: LiquidAI/LFM2-1.2B (this model is one of its quantized variants)